Gov 2.0 to Gov 2.1 (Gov 3.0)

It’s interesting to note how governments are rallying around the idea of Gov 2.0, where social networking becomes part of how governments interact with their citizens.  And now, with the ink barely dry on some of the Gov 2.0 initiatives, Gov 3.0 is taking the stage.

So what are the comparisons, and where does technology fit in to make this all happen?

This is Gov 2.0 according to Gartner’s definition.

Government 2.0 has seven main characteristics:

  • It is citizen-driven.
  • It is employee-centric.
  • It keeps evolving.
  • It is transformational.
  • It requires a blend of planning and nurturing.
  • It needs Pattern-Based Strategy capabilities.
  • It calls for a new management style.

Or, as Mr. Di Maio puts it more succinctly in this post:

Government 2.0 is the use of IT to socialize and commoditize government services, processes and data.

With points like “it keeps evolving”, you’d think that would be the end of it, because Gov 2.0 will keep becoming what it needs to be.  Well, all except for one part: the data-centric view of government.  So according to another blogger in this post, Gov 3.0 adds this:

Gov 3.0 kicks off when governments start publishing Open Data using the semantic web standards (RDF).

NOTE: for those of you who can’t memorize all the acronyms used on the WWW, RDF is the Resource Description Framework as specified by the W3C, and can be read about here: http://www.w3.org/RDF/
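
To make RDF a little more concrete, here is a minimal sketch of publishing a single fact as an RDF triple in N-Triples syntax (subject, predicate, object). The dataset URI is hypothetical; the predicate is the Dublin Core title property commonly used with RDF.

```python
def to_ntriple(subject, predicate, obj):
    """Format one (subject, predicate, object) triple as an N-Triples line."""
    return f"<{subject}> <{predicate}> \"{obj}\" ."

# Hypothetical dataset URI, described with the Dublin Core title property.
line = to_ntriple(
    "http://data.example.gov/dataset/budget-2011",
    "http://purl.org/dc/terms/title",
    "Annual Budget 2011",
)
print(line)
```

Each line like this is a machine-readable statement; a collection of them is what semantic web tools can link, query and merge across agencies.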

I suppose you could call this Gov 2.0++, or maybe Gov 2.1, because it’s not really that much of a shift from what we expected from Gov 2.0. We’ve just moved beyond the commoditization of government services to these organisations becoming data providers for citizens and third parties.

After all, isn’t this what most citizen-facing government agencies are supposed to do? In this context, the government provider organisations (as opposed to the policy organisations) have two main functions: they provide services to citizens, and they provide data to citizens.

OK, so let’s take the Gov 2.0 base of government being social, providing online services, and commoditizing those services. What would it look like? We see a lot of government organisations with Facebook pages, Twitter accounts and their own blogs. Excellent, so we’re all socialed up. But what about the data as a service?

Data As A Service

Right now, it’s hard for government organisations to allow people access to their internal data and systems to pull out data they could make use of. They have all the data, but it’s not always well organised or easily accessible, because it’s all clumped together with the sensitive data that they can’t allow access to.

So while sites like Data.gov are working well in the US, I believe it is only the beginning. There is a lot of groundwork behind being able to do this. For example, a strong data classification and handling policy has to be in place. While most agencies have a good classification system, the handling isn’t so good: all of the data, regardless of classification, is stored in a common database farm or mainframe.  This makes extracting the citizen-viewable data problematic.

That is not an insurmountable obstacle, though, and clearly government organisations are doing it. Here is an excerpt from the Data.gov site:

13 Other nations establishing open data
24 States now offering data sites
11 Cities in America with open data
236 New applications from Data.gov datasets
258 Data contacts in Federal Agencies
308,650 Datasets available on Data.gov

Another aspect is ensuring that the data is in a format that is consumable by third-party applications, so that citizens and researchers can view, analyse and report on the data in a way that is useful to them. This also means that the data needs to be regularly updated. The Australian version of Data.gov (data.australia.gov.au) is an example where the data was put there in a difficult-to-consume format (in most cases PDF) and in most cases has not been updated for 3 or 4 years.

How Do We Do It?

So in order to get data that is useful to citizens and third-party applications, we need a good government IT data repository. This repository needs to have the following characteristics:

  1. Backed by a clear data classification policy
  2. Easily accessible from the internet and any device
  3. Accessible in a format that facilitates automated systems
  4. Accessible via common open standards
  5. Updated frequently and easily
  6. Reliably accessible

There are others, but they are niche cases. These six core characteristics of a government data repository would serve most purposes. So how do we do this? How can we create these Gov 3.0 data repositories today? Enter cloud data services.

Establish a Clear Data Classification Policy

Realistically, governments will never tell citizens everything they know. There will always be data that remains behind closed doors and in redacted documents. In order to publish data to the web smoothly and efficiently, the data has to be classified, so that the proper data is published and the secret data stays secret.

Most large government organisations already have data classification, retention and destruction policies. These policies have to be implemented for electronic data, and systems that publish data need to be able to identify and filter it. This is most easily accomplished by using things like views and filtered permissions on databases, so that a restricted account can only access data that is viewable by that account.
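
As a sketch of that view-based filtering, the snippet below keeps a sensitive column and PROTECTED rows out of the published feed by querying a view instead of the base table. The table, columns and classification labels are all hypothetical, and SQLite stands in for whatever database the agency actually runs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE inspections (
    id INTEGER,
    site TEXT,
    result TEXT,
    inspector_name TEXT,   -- sensitive: staff identity, never published
    classification TEXT)   -- e.g. 'PUBLIC' or 'PROTECTED'
""")
conn.executemany(
    "INSERT INTO inspections VALUES (?, ?, ?, ?, ?)",
    [(1, "Cafe A", "Pass", "J. Smith", "PUBLIC"),
     (2, "Depot B", "Fail", "K. Jones", "PROTECTED")])

# The view exposes only PUBLIC rows and non-sensitive columns; the
# restricted publishing account would be granted access to this view only.
conn.execute("""CREATE VIEW public_inspections AS
    SELECT id, site, result FROM inspections
    WHERE classification = 'PUBLIC'""")

rows = conn.execute("SELECT * FROM public_inspections").fetchall()
print(rows)  # only the PUBLIC row, without inspector_name
```

The same idea scales to grants and row-level security on a production database: the publishing process never sees data it isn’t cleared to publish.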

This is again, something we’ve been dealing with and implementing for some time.

Easily accessible from the internet and any device

This is perhaps the characteristic that will allow the explosion of data to citizens. With the proliferation of mobile devices, and apps for those devices, accessing the data easily means more people will access it. Let’s face it, all of these devices now have internet connectivity, so making this data available on the internet is the obvious pathway. But we don’t always want to expose our servers to the internet. With cloud-based data services like SQL Azure, we don’t have to. We can put the data in the cloud, and the anonymous users on the internet aren’t touching our network anymore. So this makes it easily accessible, and keeps potential hackers off our network.

Accessible in a format that facilitates automated systems

Let’s face it, most citizens don’t look at raw data, spreadsheets or kilometre-long lists of stats. They will be viewing the data through some form of application, charting system, or other data-driven mechanism. So this means that the closer we can get to XML or relational storage of this data, the better. Most applications written today, and certainly any mobile device application, can consume XML or relational data. This is probably how the systems inside the organisation created the data in the first place, so why pump it out into a PDF when users may want to actually import the data for analysis and creative interpretation?
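
As a small sketch of what “machine-consumable” means in practice, here is tabular data serialised as XML with the standard library, instead of being flattened into a PDF. The record fields are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical records as they might come out of a relational store.
records = [
    {"suburb": "Parramatta", "year": "2010", "value": "1234"},
    {"suburb": "Newtown", "year": "2010", "value": "987"},
]

# Build <dataset><record><suburb>…</suburb>…</record>…</dataset>
root = ET.Element("dataset")
for rec in records:
    row = ET.SubElement(root, "record")
    for key, val in rec.items():
        ET.SubElement(row, key).text = val

xml_bytes = ET.tostring(root, encoding="utf-8")
print(xml_bytes.decode("utf-8"))
```

Any third-party app, on any device, can parse this directly; recovering the same numbers from a PDF means scraping.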

This is another area where storing the data in the cloud would facilitate consumption by automated systems. With a small service layer over the top of the data that can serve up the data as XML, ATOM, JSON or other suitable format, we’re ready to go. But what if we wanted to be a bit more basic with the data and give users access to something a little lower level?

Accessible via common open standards

The best thing to do is to stick to common and easily implementable standards; in this case, OData. Many data services are looking toward OData as their access interface. Using open standards like this makes the data far easier to consume.

Updated Frequently and Easily

Unless you are writing a historical document, data isn’t much good if it stagnates.  It must be kept current to remain relevant to people. Keeping data relevant and updated represents a challenge: how do you do that without employing an army of people to maintain the data store? Even if you had an army, how do you enable them to write to this public data repository in a safe and secure manner without duplicating everything you’re already doing in-house? There are lots of examples of data repositories that have been stood up and then stagnated. The data.gov.au site, which has been in beta for the past few years and is full of spreadsheets and PDF files, is an example of such a repository.

Ideally, we’d like to take data we’ve already collected, organised and filtered, and somehow make it available to the public on this data repository without any extra effort or processing on our part. So the agency will have to either implement a filtered data sync system (perhaps through linked DB servers), or the data entry system needs to push the data to the cloud as well as to internal stores.
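
The filtered sync option can be sketched as a one-way copy that pushes only PUBLIC rows from the internal store to the published store. Here both sides are in-memory SQLite stand-ins and the schema is hypothetical; in practice the target would be a cloud database such as SQL Azure.

```python
import sqlite3

# Stand-ins for the internal system of record and the published cloud copy.
internal = sqlite3.connect(":memory:")
cloud = sqlite3.connect(":memory:")
for db in (internal, cloud):
    db.execute(
        "CREATE TABLE stats (id INTEGER, value TEXT, classification TEXT)")

internal.executemany(
    "INSERT INTO stats VALUES (?, ?, ?)",
    [(1, "open", "PUBLIC"), (2, "secret", "PROTECTED")])

def sync_public(src, dst):
    """Copy only rows classified PUBLIC from src to dst; return the count."""
    rows = src.execute(
        "SELECT id, value, classification FROM stats "
        "WHERE classification = 'PUBLIC'").fetchall()
    dst.executemany("INSERT INTO stats VALUES (?, ?, ?)", rows)
    return len(rows)

copied = sync_public(internal, cloud)
print(copied)  # 1 row published; the PROTECTED row never leaves the network
```

Run on a schedule (or triggered by the data entry system), this keeps the public copy current without anyone maintaining it by hand.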

Reliably Accessible

One of the obvious benefits of cloud-based systems is that cloud providers have built their data centres to support internet-scale availability. Rather than establishing, maintaining, upgrading and managing your own data centre to handle peak loads, failover and so on, you can put the data on technology that has been built from the ground up to support such a scenario. Keep in mind that we are talking about data that is designed to be consumed by the public. It is not secret or sensitive, so putting it on a public cloud infrastructure doesn’t represent much of a problem where privacy or data sovereignty is concerned.  With cloud-based services, you get scale, failover, maintenance and management all for your subscription fee. Your organisation doesn’t have to deal with extra employees, hardware refreshes, patching, or OS upgrades: none of it.

Summary

When it comes down to it, the next phase, Gov 3.0 data publication, is on the verge of becoming mainstream. The technology and facilities to publish data to the public, encouraging government accountability, third-party use of public data, and wide dissemination, are available right now. Gov 3.0 can become a reality in the very near future. We need to start making concerted efforts to get governments to publish data in a timely manner. This will not only encourage government accountability, as it did in the UK and some states in the US, but also make people feel closer to their government and more part of it, not just subject to it.