The term GIS is one that tries to balance two very different disciplines: the G of geographic knowledge and processes, and the IS of information systems, the computer systems and processing power that allow the provision of maps and spatial analysis that was never possible before. As the software and systems have become more and more sophisticated, GIS has often seemed to be more about the IS component and less about the G. This is especially true since the advent of server software for providing map and analysis functionality that can be deployed on an intranet or the internet.
The GIS department had to start hiring for, and understanding, the technical complexities of server installation and web development, whereas in the past it had mainly been concerned with fulfilling requests for maps and spatial analysis. Often this led to the G and IS components being split across a number of departments, and to the complexities of any project rising as a result, especially if the IS was outsourced by the company. As the complexity of the systems gets ever greater, the pressure on an organization's IT and IS departments grows too, whilst the push for GIS to be ubiquitous within an organization adds increasing cost pressure to provide maps at 'Google speed'.
The question is: how much G do people need, and how complex does the IS need to be to support them?
G for all
I remember poring over esteemed journals and papers in my youth (last century!) whilst people regaled readers with the exact types of functionality required for a system to count as a GIS. That software doesn't do this, so it can't be a GIS; our software does it, it's got GIS on the box, therefore it must be a GIS. Often the premise was to get what I would call hard-core GIS on the desktop for as many people as possible, whether as an install or through a browser, complete with complicated functionality such as editing or complex geographic processing. All of which came in a new interface that required a great deal of training to use, which obviously benefited the training departments of the organizations in question.
One of the benefits of a product like Google Maps or Google Earth is the number of people who have already used it. Reducing the time it takes someone to get up to speed with a product 'they already know how to use', to quote an iPad advert, matters to organizations rolling spatial functionality out to 50 or 100 people, possibly more. Arranging training courses on complex products can be both time-consuming and expensive.
This is also the case for how people share information. Having to install and configure complex software just to share a map, whether within a small department or with a wider group of people, is a barrier to the take-up and use of geographic information, before you even speak of the hardware, or of the expensive, complex software that must be maintained and configured. If this difficulty is taken away, then all sorts of people can take a spreadsheet of points, a set of addresses or even a KML file and upload it to a data store that does just that.
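To give a sense of how small that first step really is, here is a minimal sketch in Python that turns a spreadsheet-style CSV of points into a KML file ready for upload. The column names (`name`, `lat`, `lng`) and the sample data are hypothetical; adjust them to match your own spreadsheet.

```python
import csv
import io

def points_to_kml(csv_text, name_col="name", lat_col="lat", lng_col="lng"):
    """Convert a CSV of named points into a minimal KML document.

    The column names are assumptions about the spreadsheet layout;
    pass different ones if your headers differ.
    """
    placemarks = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        placemarks.append(
            "  <Placemark>\n"
            f"    <name>{row[name_col]}</name>\n"
            # KML coordinates are longitude first, then latitude.
            f"    <Point><coordinates>{row[lng_col]},{row[lat_col]}</coordinates></Point>\n"
            "  </Placemark>"
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        "<Document>\n" + "\n".join(placemarks) + "\n</Document>\n</kml>"
    )

# Hypothetical sample data: two named points with lat/lng columns.
sample = "name,lat,lng\nHead Office,51.5074,-0.1278\nDepot,53.4808,-2.2426\n"
print(points_to_kml(sample))
```

The resulting KML can be dragged into Google Earth or uploaded straight to a cloud data store, which is precisely the low-friction sharing being described.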
Google Fusion Tables provides just such an environment. It's not an overly complicated piece of software: it simply allows someone to take some spatial data, such as a set of geographies, and upload it into a cloud-based data store where the information is rendered in a table-like environment. At this point there is the ability to filter, aggregate or link the data to any other table, and then create a simple visualisation that can be placed on the Google base map or linked into Google Earth. That's it: no complex configuration of servers, and no need to handle security yourself, as this can be provided through authentication against groups of Google account users, or, if you're sharing non-sensitive information, the data can simply be made public. Everything can be handled within a browser, with no need to involve any IS group or outsourced department; sharing power can be given back to the actual providers of the data, and to the people who want to play with visualizing it rather than configuring servers.
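For those who do want to go beyond the browser, the same tables can be filtered programmatically. The sketch below builds a request URL following the `query?sql=...` shape of the Fusion Tables SQL API; the table ID and API key are placeholders, not real values, and the endpoint shape is an assumption you should check against the current API documentation.

```python
from urllib.parse import urlencode

# Hypothetical table ID; a real one is shown in the Fusion Tables UI.
TABLE_ID = "1aBcD_exampleTableId"

def fusion_query_url(sql, api_key):
    """Build a query URL for the Fusion Tables SQL query endpoint.

    Assumes the documented /query?sql=... form; both arguments
    here are placeholders for illustration only.
    """
    base = "https://www.googleapis.com/fusiontables/v2/query"
    return base + "?" + urlencode({"sql": sql, "key": api_key})

url = fusion_query_url(f"SELECT * FROM {TABLE_ID} LIMIT 10", "YOUR_API_KEY")
print(url)
```

The point is not the specific call but the contrast: a one-line URL against a browser login, versus standing up and securing your own map server.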
Sure, in the background there is a whole lot of IS going on, but the knowledge of uploading items and managing security on them is now mainstream enough, through sites like Google and Facebook, that there is a ready army of new graduates who already know how this is done. Indeed, this is the way they will expect spatial data to be shared!
To G and beyond!
GIS covers a wide variety of implementations, from viewing data in Google Earth (yes, it's a GIS to some degree) to manipulating features in ArcInfo (which is definitely hard-core GIS!). To say that one is a GIS and another isn't is a matter of geo-semantics. The more the complexity of sharing and visualizing spatial information is reduced, the more it will be used within organizations. The easier it becomes, the more it will be used, not only by the users but also by the people sharing the data for them to see.
So in the future, don't believe the confusing semantics about whether something is a 'GIS' or not; just work out whether it has enough 'G' for what you need to do. As the complexity of getting or accessing spatial data online is reduced, many more people will be 'doing' or 'using' GIS, whether they realize it or not. That can only be a good thing.