Who should I support?

There are many reasons why a person supports a football (read soccer for our US friends) club. Often these reasons come from the heart, based upon where a person was born, who had the nicest kit, or Manchester United. I support Crystal Palace, so I’m a sucker for punishment, but in an amazing turn of events they managed to get promotion this year. In honour of the single season they are probably going to have in the top flight of English football, I decided to remove all of the emotion from supporting a team and wrote a demo which uses pure distance to determine who to support.

There are many people around the globe who watch English football, and many of them wonder who they should support, often influenced by factors such as whether the team is successful or plays attractive football. Whilst these are valid points in determining which team to support, one could argue that it is equally important to support your local team. For many of us this might be obvious, but for a person in Tokyo, Miami or Cairo the choice is less clear. Well, no longer: I give you the Who Should I Support site, a site that takes the emotion and chance out of supporting a club.
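At its heart the demo reduces to a nearest-neighbour lookup over club grounds. A minimal sketch of that idea in JavaScript, using the haversine great-circle formula; the club list, coordinates and function names below are illustrative, not the site’s actual data or code:

```javascript
// Great-circle distance between two lat/lon points in kilometres.
function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Walk the club list and keep the closest ground to the user.
function nearestClub(userLat, userLon, clubs) {
  let best = null;
  let bestKm = Infinity;
  for (const club of clubs) {
    const km = haversineKm(userLat, userLon, club.lat, club.lon);
    if (km < bestKm) {
      bestKm = km;
      best = club;
    }
  }
  return { club: best, km: bestKm };
}

// Illustrative data: two grounds only.
const clubs = [
  { name: "Crystal Palace", lat: 51.3983, lon: -0.0857 }, // Selhurst Park
  { name: "Manchester United", lat: 53.4631, lon: -2.2913 }, // Old Trafford
];

// A fan in central Tokyo gets whichever ground is nearer along the great circle.
const result = nearestClub(35.6762, 139.6503, clubs);
console.log(result.club.name, Math.round(result.km) + " km");
```

In the real site the distance query runs against data held in Maps Engine rather than a hard-coded array, but the principle is the same.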

This week saw the release of the Google Maps Engine API, which adds some key new functionality to the Google Maps Engine platform for querying geospatial data. Maps Engine allows you to upload massive amounts of data into the Google cloud, and the API provides key functions for querying and editing this data, whether for building applications like this one or for integrating into internal systems or mobile applications, with the same security and scalability that is available with all of Google’s products.

I also wanted to use this demo to look at how easy it is to integrate Google Maps with Twitter Bootstrap, which allows developers to easily add responsive design to their web development. The answer is: it’s easy once you add a few tweaks from StackOverflow.
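For reference, the usual tweak is a one-liner: Bootstrap applies `max-width: 100%` to all `img` elements to make images responsive, which distorts Google Maps tiles and controls. The common StackOverflow fix is to reset it inside the map container (the `#map-canvas` id below is illustrative; use whatever element holds your map):

```css
/* Bootstrap's responsive img { max-width: 100%; } rule breaks
   Google Maps tiles and controls, so undo it inside the map div. */
#map-canvas img {
  max-width: none;
}
```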

Anyway I hope you enjoy the demo and have fun following your new club!


Geo Semantics


The term GIS is one that tries to balance two very different disciplines: Geographic knowledge and processes on one side, and Information Systems on the other, the computer systems and processing power that allow the provision of maps and spatial analysis that was never possible before. As the software and systems have become more and more sophisticated, GIS has often seemed to be more about the IS component and less about the G. This is especially true with the advent of server software for providing map and analysis functionality that can be deployed on the intranet or internet.

The GIS department had to start hiring for, and understanding, the technical complexities of server installation and web development, whilst in the past it had mainly been concerned with fulfilling requests for maps and spatial analysis. Often this led to the G and IS components being split across a number of departments, and the complexity of any project rising as a result, especially if the IS was outsourced by the company. As the complexity of the systems gets ever greater, the pressure on an organization’s IT and IS departments becomes even greater, whilst the push for GIS to be ubiquitous within an organization adds increasing cost pressure to provide maps at ‘Google speed’.

The question is, how much G do people need and how complex do the IS need to be to support them?

G for all

I remember poring over esteemed journals and papers in my youth (last century!) whilst people regaled the readers with the exact types of functionality that were required for a system to be a GIS. That software doesn’t do this, so it can’t be a GIS; our software does it, it’s got GIS on the box, so it must be a GIS. Often the premise was to get what I would call hard-core GIS on the desktop for as many people as possible, be that as an install or through a browser, with complicated functionality such as editing or complex geographic processing. All of which came in a new interface which required a great deal of training to use, which obviously benefited the training departments of the organizations in question.

One of the benefits of a product like Google Maps or Google Earth is the number of people that have already used it. The reduction in the time it takes someone to get up to speed with a product ‘they already know how to use’, to quote an iPad advert, is important to organizations that are rolling spatial functionality out to 50 or 100 people, possibly more. Arranging training courses on complex products can be both time consuming and expensive.

This is also the case for how people share information. Having to install and configure complex, expensive software, just to share a map within a small department or with a wider group of people, not to speak of maintaining any hardware, is a barrier to the take-up and use of geographic information. If this difficulty is taken away, then all sorts of people can take a spreadsheet of points, a set of addresses or even a KML file and upload it to a data store that does just that.

G, without the IS

Google Fusion Tables provides such an environment. It’s not an overly complicated piece of software: it just allows someone to take some spatial data, such as a set of geographies, and upload it into a cloud-based data store where the information is rendered in a table-like environment. At this point there is the ability to filter, aggregate or link the data to any other, and then create a simple visualisation of the data that can be placed on the Google base map or linked into Google Earth. That’s it: no complex configuration of servers, and no need to handle security separately, as this can be provided using authentication for groups of Google account users, or, if you’re sharing non-sensitive information, the data can simply be made public. Everything can be handled within a browser, with no need to involve any IS group or outsourced department; sharing power can be given back to the actual providers of the data, or to those people who want to play with visualizing the data rather than configuring servers.

Sure in the background there is a whole series of IS going on, but the knowledge of uploading and managing security on items is now mainstream enough through sites like Google and Facebook that there is a ready army of new graduates who already know how this is done, indeed this is the way they will expect spatial data to be shared!

To G and beyond!

GIS covers a wide variety of implementations, from viewing data in Google Earth (yes, it’s a GIS to some degree) to manipulating features in ArcInfo (which is definitely hard-core GIS!). To say that one thing is a GIS and another isn’t is a matter of Geo Semantics. The more the complexity of sharing and visualizing spatial information is reduced, the more it will be used within organizations. The easier it becomes for people, the more it will be used, not only by the users but also by the people sharing the data for them to see.

So in the future don’t believe the confusing semantics about whether something is a ‘GIS’ or not, just work out if it has enough ‘G’ for what you need to do. As the complexity of getting or accessing spatial data online is reduced then many more people will be ‘doing’ or ‘using’ GIS whether they realize it or not. That can only be a good thing.

Apps or Sites?


Part of me chuckled at the so-called hack that affected Twitter today. Not that something like this couldn’t affect any site (although given the simple and well-known nature of the attack, it really shouldn’t have hit a site like Twitter), but it did remind me of the days in the early 00s when this sort of thing was commonplace, and of the sort of problems we all had to face when coding sites in that era.

What did strike me was that whilst I noticed it (random JavaScript tweets are always fun, but repeating the same one over and over again labours any joke), I wasn’t affected by it. Why? Because I wasn’t accessing the Twitter website, only consuming the feed from within an app, in my case TweetDeck.

Saved by TweetDeck

I was somewhat surprised this week when I heard that 70% of people still use the Twitter website to send and read Tweets. I mean, Wired this month had a whole article bemoaning the death of the web; hasn’t Twitter read that and immediately shut down the home page? Hmm, no. In fact they just released a whole bunch of new functionality (which I can’t yet use, damn them) that can only be accessed via the website, just in time for the hack to emerge.

Wired do have a point though: more and more people are buying applications for their phones. As smartphones become cheaper and cheaper, more and more people will buy apps, just as more and more people will get access to the internet for web browsing. Applications will use the old ‘internet’ for the services behind the apps on their phone, and the ‘web’ will go back to being one of the protocols used on it, namely HTML over HTTP.

The thing about apps is that they don’t suffer the same attack profile as a website. When information is mainly entered using an HTML form, that’s where people will look to attack. It’s harder to attack a series of apps that consume a data feed, unless you can corrupt the feed in some way, as each app usually displays the data in its own way, usually not using direct HTML or even a browser.
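The Twitter worm worked because tweet text was echoed into the page without escaping, letting a crafted tweet break out of an HTML attribute and attach an `onmouseover` handler. The defence is output escaping before user content touches the page. A minimal sketch in JavaScript; the function name and payload are illustrative, not Twitter’s actual code:

```javascript
// Escape the HTML metacharacters that let user input break out of
// an element or attribute context. Ampersand must be replaced first,
// otherwise the later entities would be double-escaped.
function escapeHtml(s) {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A payload in the style of the 2010 onmouseover worm: the raw quotes
// would close the attribute and inject an event handler.
const tweet = '"onmouseover="alert(1)"';
const safe = escapeHtml(tweet);
// `safe` now contains no raw quotes or angle brackets, so it can be
// placed inside an attribute or element without changing the markup.
```

An app rendering the same feed into a native list control never interprets the payload as HTML at all, which is exactly the attack-profile difference described above.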

Of course you could be using a compromised application, either downloaded onto a PC from an untrustworthy source or side-loaded onto an Android or jail-broken iPhone; in that case, don’t say I haven’t warned you. In fact the careful cultivation of the App Store under iTunes, and to a slightly lesser degree the Android Marketplace, adds that little bit more protection for users compared with the wilful installation abandon people have on their home (and sometimes work) PCs (and Macs and Ubuntu boxes, but as I said, who’s bothering to write a virus for those relatively paltry numbers of users /jk!).

Patched Apps

The fact that Twitter patched the XSS issue in relatively short order shows one of the main areas where the web works well. The ability to roll out a change to millions of users at once, be it a patch or new features, after thorough testing of course, is only really possible with applications that leave no trace on the local machine. Cloud-based applications not only protect your data from hardware failures; they can also be patched or upgraded without you having to do anything. Now I know some people will not like this, the same sort of people who still use OS/2 because they don’t understand these new-fangled operating systems.

Desktop and mobile applications require an upgrade cycle because they rely on you installing something on a machine. On a mobile application this can be more arduous as they rely first on the developer getting the new application checked by the store it’s being delivered by, and then you have to be notified by the store that a new version is available, finally you have to actually install it.

On the web once it’s passed the requisite tests, it’s just there. Updated lazily in the background or when you next log into the website.

Apps or Sites, your call as long as it’s the cloud.

I read a comment the other day: every time I open an on-premise application or use an on-premise server to create data, I take a risk with my data. Every time I use cloud services for all sorts of tasks, I know it’s not quite as whizzy as on-premise applications or servers, but I know that if my machine or server dies, my data is still there. All I need is another browser and I’m up and running again. No need to install an app, and these days mostly no need to worry about the operating system either.

It works for me; I for one am happily replacing my offline apps with online ones when I can. Sure, I still use some installed applications: I still love Live Writer for blogging, Picasa for managing my photos, and Google Earth for, well, just looking at my house from space. But those are the last few on-premise applications I use at home (that aren’t games), and Google Earth and Live Writer are really conduits for online services.

I could do the same with my photos if I ever got the time, and there is the crux: as it becomes easier and easier to move data and information to the cloud, or for the many digital natives whose data has only ever resided there, more people will, and hopefully they will be better off because of it.