Freemercialism with a Spudger?

Let’s start this post off with a simple question: you do know what a spudger is, don’t you? What, you don’t? And more importantly you don’t know how you could use one to take apart an iPad? Well you’ve come to the wrong place to find out what and how. You can find the right place to do that at iFixit, who style themselves the ‘free repair manual that you can edit’ but are actually more like a parts catalogue that shows you what you can do with technology and then gives you a link to buy the widgets you need to do said task from their online shop. In this particular case, pun intended, they showed how to take apart an iPad using the implement in question, exhibit A: one spudger.

This is a classic case of cross-subsidy of one product with another: giving away a free guide encourages you to buy a new (you don’t already have one, do you?) tool in order to perform the task. It’s one of many ways that companies are able to provide free software that for many users has absolutely no cost. Slightly different from completely free, where there is no commercial gain for the supplier at all, these economic models have been growing on the web since it started, but they did not begin there. The 2008 Wired article “Free! Why $0.00 is the future of business” gives an excellent overview of where free comes from and why it’s becoming increasingly important. In an almost prophetic manner, that article and the accompanying book “Free: The Future at a Radical Price” expound on how you can seemingly get something for nothing, how people now expect it, especially with software and services on the web, and how commercial companies have to develop business models to embrace and/or compete with it. In the post bank-bust world, with central and local government agencies, especially in the UK, having limited capital to blow on massive IT projects, free might not just become part of the solution, it might become the only solution.

The book is itself free online and can be obtained here. It’s interesting to see that the unabridged audio book is also free, but the abridged version costs money. Obviously anyone can read the book verbatim onto tape (old school), but the work it takes to create a meaningful abridged version contains value and therefore costs money. Value in terms of the time it has taken to edit the book down, and also the value it has to the attention-challenged, time-poor, iPod-carrying commuter who can’t concentrate for six hours on anything. The same goes for the dead tree version; the payback for being able to fall asleep with your new paper book in the bath is the cost of the pulping, printing and possibly delivery.

Free OS Data, finding a freemercial model.

In the GIS industry in the UK, whilst there is a plethora of good software available for nothing (see here), it has always been the cost of data that has been one of the main talking points around the industry. The recent freeing of some of the data from the Ordnance Survey (I know this makes it sound like some sort of mystical quest; that’s because in some ways it has been) has now provided a good deal of authoritative spatial data for base maps and base geographies as well as gazetteers. Now that this data is ‘free’ it’s going to be interesting to see how people will try to add value to the datasets to realise value in the marketplace.

One of the main reasons behind freeing the data up was to encourage the use of geographic information within the general commercial landscape, outside the universities, utilities and governmental organisations that have been its natural home. How people will get access to this information is the next challenge; many of the business models outlined in Chris Anderson’s book will be applied to the delivery and usage of this data. Some will succeed and some will fail, but it will be interesting to see how many people outside the traditional geo-markets will be able to get access to this data and how they will interact with it for nothing using widely available tools, which themselves are free.

Who Pays?

It will also be interesting to see how businesses can afford to cross-subsidise this access, how they will be able to make money out of such offerings. In the past just the value-add of supplying the data used to be enough to justify a fee; in today’s market that might no longer be enough for many people, or at least some access should be available for nothing. One thing is for certain: in the current cash-strapped world, free might be the only starting point many people will want, or can afford.

Discussing Cricket with an Alaskan

Whilst this post might sound like it’s about explaining a game played with leather and willow to a guy who lives in a part of the world that could only accommodate ice cricket for most of the year (to be honest the UK isn’t much better for cricketing weather either, but that’s beside the point), it’s actually about this year’s ESRI Developer Summit, which I was lucky enough to attend in Palm Springs.

To ArcGIS 10 and beyond!

The 2010 #DevSummit was a cornucopia of new technologies and online systems rolling out towards the release of ArcGIS 10. Even though I work for a distributor there was still lots of new stuff to see, especially around the online world: the use of Amazon for hosted ArcGIS Server (pricing to be confirmed), of which you can see an overview from the Business Partner Conference here, and the development of the aforementioned arcgis.com (currently not accessible, but it will be real soon and has the new ArcGIS Explorer Online), of which you can see two videos here (BPC) and here (DS).

The iPhone SDK got quite a lot of slide-time, in both the plenary (here) and a number of sessions. I wanted to make time to have a look (I have an iPhone but no Mac), but there were so many other interesting sessions that I never had a chance, although I can now catch up online.

Online all the time.

Due to the nature of today’s internet, the videos of the Developer Summit were online before my flight had dropped me back at Heathrow. So you too can follow mostly the same schedule that I followed, minus the face-to-face discussions with development leads, service leads, product managers and other distributors that make conferences like these invaluable and allow you to plan your development strategies for months to come. Some of the sessions I saw were as follows.

  • The plenary: lots of videos here, digging into all the different areas of the upcoming product.
  • Using the ArcGIS Server REST APIs: I can’t get enough of the REST API, particularly the variety of new geometry functions provided.
  • An Overview of the ArcGIS API for Microsoft Silverlight/WPF: I’ve been doing lots of Silverlight development recently, and whilst many of the cool kids went to the Flex sessions, I checked that I wasn’t doing anything insane by going to some of the Silverlight ones.
  • Python Scripting for Map Automation in ArcGIS 10: I like the way you can analyze map documents using arcpy! In fact I like most of the stuff you can do with arcpy; now if only someone would write me an Avenue wrapper for it and I’d be good.
  • Advanced Map Caching Topics: lots of tips about how to create efficient caches, how caches will change in ArcGIS 10, and the new mixed-mode image format; watch and cache.

Unfortunately some of the interesting ones aren’t up there yet, especially the one from the prototype lab with the augmented reality and Microsoft Surface demos, which looked quite slick (and had one of the biggest overhead cameras I have ever seen); David Chappell’s keynote going through cloud platforms (a must-watch if you haven’t heard him go through this before); and finally another Silverlight session where they showed Windows Phone 7 integration with the Silverlight API. If I see them posted online I’ll update the list above.

So what’s with the title?

Anyway, the title of this piece is all about the benefits of attending a conference far away from home: the opportunity to talk to people face to face about technology, something that is not possible when you’re watching the talks over the internet. In this case we were out for dinner one night when a colleague turned up with some poor unsuspecting fellow he’d met at the conference (@jdoneill from Girdwood, AK); as is the way with conversations with me, the subject turned to cricket and then the floodgates opened.

Although we never did discuss the merits of leg theory, I did try to cover all the other aspects of the game, and he even managed to seem interested for most of the conversation; he was a baseball fan, so the compare/contrast and mutual appreciation of each other’s stat-laden sport ensued. If you ever want to read such a comparison from an English point of view then I’ll point you to this book.

As an aside, we won’t mention the guy he turned up with the next night. Remember, what happened in Palm Springs stays in Palm Springs, well apart from the technical information that is.

Watermarking, WMS and maybe other things beginning with W.

I should caveat this post with the often used phrase ‘don’t try this at home kids’. When tinkering with the guts of any system and modifying the information being sent to and from a service by ‘hacking’ into the request pipeline, you’re opening up a whole can of performance and stability worms that needs a great deal of testing under load to understand the direct effect on the scalability of any site.

A Simple Question

This post is based upon a question I got from a customer at our recent DeveloperHub conference in Birmingham. He asked how it would be possible to watermark an image that had been served from a request to a WMS service. Ed has given an excellent overview here of why a customer might want to watermark their images, and some methods to do it. For this query though there was a need to do the watermarking at the server level and not the client, as you don’t want to restrict access to only those systems that have been modified to apply whatever watermark solution you have adopted.

With WMS you also need to make sure you don’t force a client to process a response that is not compliant with the OGC WMS specification. So you end up needing to do some invisible modification of the request or response in order to handle the addition of a watermark without any client realising anything has happened.

The WMS Request / Response Cycle

It’s worth taking a brief aside here to look at the WMS request / response cycle. WMS services can simply be called from a URL, as stated in the OGC specification (here), using an HTTP GET. Depending upon the type of operation you are trying to perform, the parameters for the URL will vary. In our case we are most interested in the operation that requests maps, unsurprisingly enough the GetMap request. This uses parameters to control the location of the area to return a map of, the layers to be displayed and the format of the image to be returned.
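To give a feel for the shape of these requests, a GetMap call against an ArcGIS Server WMS endpoint looks something like the following (server and service names entirely made up; EPSG:27700 is British National Grid):

http://myserver/arcgis/services/BaseMap/MapServer/WMSServer?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=0&STYLES=&SRS=EPSG:27700&BBOX=400000,300000,410000,310000&WIDTH=512&HEIGHT=512&FORMAT=image/png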

Once the request has been processed, the response, an image, is returned as binary to the client. In a web browser that gets placed into an image tag for display. It’s this binary image that we are able to edit as it passes through, in order to watermark it.

Why not burn it in?

One question that needs to be answered is why you don’t just create a tile cache with the watermark image already burnt in. This would be the most performant solution as it front-loads the processing away from the actual request by the user; this improves the response time but leaves a cache that can possibly only be used for one task. Indeed, with more than one client requiring more than one type of copyright notice, or image overlay, each would possibly need its own set of tiles, or its own service.

Alternatively, you could have a map service with a layer which contains the watermarking details. You can search the WMS request string for the inclusion of this layer; if it’s not there then you can add it. This is fine, but it means you are actually messing with the request that’s being made by the client, which could cause bugs to be introduced into any application making the requests.

A more flexible solution, albeit possibly a less performant one, would be to handle the addition of any information over the top of the image at a stage of the request where it can be applied after the actual creation of the map. In terms of ArcGIS this would be after the request has come in and been processed by the map service, when the clean image is returned as a binary image object.

A Pipeline Solution

The last method was the answer I actually gave at the event: intercept the response from the WMS service and stamp the returned image with the required watermark, either textual or image based. But how to achieve this? The serving of the image from ArcGIS is handled deep within the SOC process, which was untouchable; what wasn’t untouchable was the request/response pipeline in the web server, in my case IIS. In the past this might have required the writing of some sort of ISAPI filter to hook into this pipeline, but since .NET came along it has been possible to write an HttpModule to do the same.

The HTTP module allows you to hook into public events in the request / response pipeline. Specifically the BeginRequest and EndRequest events, which allow you to check the content of a request before it’s forwarded on to ArcGIS Server, and to process the returned image that is the result from ArcGIS Server before it’s returned to the client. This pipeline can be simply shown in the following diagram.

[Diagram: client request → IIS HTTP module (BeginRequest) → ArcGIS Server map service → IIS HTTP module (EndRequest) → response back to client]

Bringing it all together

In order to get the application to run, and to be able to debug it (especially in IIS6, which can only work with files processed by the ASP.NET worker process), you need to create a handler that maps to the arcgisservices directory within your ArcGIS install (see why I say don’t do this at home!). The easy way of doing this is creating a Visual Studio project within that directory (as you can see from the VS 2008 project to the right).

Once the solution is in the right place you can update the existing web.config within that directory. It will already contain the ESRI handler and module details that are needed for the operation of the ArcGIS Server services; placing an entry for a new module after the existing ones will allow you to hook into the pipeline before and after the ESRI modules (remember, this could seriously damage your ArcGIS Server health; use with caution on a test machine before letting it anywhere near production). The entry would be similar to that given below.
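Something along these lines, with the module and assembly names being entirely my own invention (IIS6 reads this from system.web/httpModules; IIS7 in integrated mode wants the equivalent entry under system.webServer/modules):

<system.web>
  <httpModules>
    <!-- the existing ESRI entries stay above; our module hooks in after them -->
    <add name="WatermarkModule" type="WmsWatermark.WatermarkModule, WmsWatermark" />
  </httpModules>
</system.web>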

Once we have these elements in place we can add our class with the IHttpModule interface. You can see how to do this in the example at the MSDN site for the creation of a custom HTTP module.
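As a minimal sketch, using the hypothetical names from the config entry above, the module itself does little more than wire up the two events:

using System;
using System.Web;

namespace WmsWatermark
{
    public class WatermarkModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            // fires before the request reaches the ESRI handler
            context.BeginRequest += OnBeginRequest;
            // fires after ArcGIS Server has produced its response
            context.EndRequest += OnEndRequest;
        }

        public void Dispose() { }

        private void OnBeginRequest(object sender, EventArgs e)
        {
            // inspect the incoming request here (see below)
        }

        private void OnEndRequest(object sender, EventArgs e)
        {
            // stamp the outgoing image here (next post!)
        }
    }
}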

Hooking into the ArcGIS Server Request

In order to perform the watermarking task, it’s necessary to perform a number of steps before and after the request. Where we get involved is in the BeginRequest event handler; this gets fired once a request is made to ArcGIS. In any system it’s good to only do processing of requests when needed, therefore we need to be able to test that a request to ArcGIS Server is for a WMS map. This can be done by converting the incoming request stream to a string and parsing that, code to be found here.
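The linked code is the version to use; as a rough sketch, for a GET request where the parameters sit in the query string, the test boils down to something like this (WMS parameter names are case-insensitive, hence the comparisons):

private static bool IsWmsGetMap(HttpRequest request)
{
    string query = request.Url.Query;
    return query.IndexOf("SERVICE=WMS", StringComparison.OrdinalIgnoreCase) >= 0
        && query.IndexOf("REQUEST=GetMap", StringComparison.OrdinalIgnoreCase) >= 0;
}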

At this point our watermarking service could perform no end of housekeeping: checking the type of watermark to apply to a specific service, or whether one is to be applied at all. It also might be good to read any image to be applied to the response and add it to a cache layer (if we do this for lots of images we don’t want disk access slowing us down more than once). We are now set to let the request filter down the stack to ArcGIS for processing, and we can wait for the EndRequest event handler to fire for us to get down and dirty with the WMS response.

In next week’s episode – Hooking into the ArcGIS Server Response

It’s at this point I realise that I’ve written another 1000+ words on something that started as a simple question, and that having to read much more in one go might cause you to slowly lose the will to live. In order to save you, I’ll leave the next part, taking the response and applying the watermark, till another post. Probably after the ESRI Developer Summit, where I no doubt will be shown a better way of doing this.

PS: What, no code?

So you might be thinking: where is my sample, and how can you get access to it? Well, whilst I’ve described all the tools to write this application, they haven’t been tested, especially for use at scale. Modifying the pipeline of ArcGIS is not to be taken lightly. The work to actually do this isn’t very hard; it’s almost all provided with samples from MSDN, and like I did, I would start by reading the custom HttpModule section on that site. Good luck!

The Lure of Easy

The other day I built a computer almost from scratch. I can admit it, I can nerd it with the best of them when pressed; OK, I don’t even need to be pressed. I had a bunch of components lying around, a not too old processor, a bunch of fast RAM and a laptop hard drive; all I needed was a case. That was easy to rectify as I’ve always fancied building a little PC and Shuttle do some excellent barebones machines. Now the premise of this post is not the coolness of my new computer (although it is quite nice) but the ease with which it was built.

When I was Young.

When I was young and ‘the internet was all fields’ I remember building many a machine, both in and out of work. I remember saving my cash for the components, carefully making sure I didn’t bend anything when I slotted processors into motherboards and affixed strange looking fans to the top. I remember screaming when one of the components didn’t work and the whole machine failed to boot. I remember returning complete orders and vowing never to build another computer again. But the lure is too much for some things, and time can heal all wounds, even those inflicted by bad memory modules.

Now, whilst I was away from the field of home brew machines a number of things have happened: component prices have reduced, hardware is much more modular and available, I have an electric screwdriver (my only power tool I might add) and I can buy ready-made small machines with integrated motherboards at every online store. Now what does this add up to? An ability to assemble a machine in under 30 minutes, from start to end. I was shocked; surely it must be harder than this. After a brief moment of screeching from the machine, as I had forgotten to plug in the graphics card power supply, I was up and running, installing Ubuntu (it’s free damn you, and until I know it’s stable I’m not putting Windows on it!) and hooking it up to the ‘interwebs’.

Now the question arises: if it’s so easy, why would I not recommend building all the machines I own, or use at work? I’d be able to save money and tinker with hardware, what’s not to like?

So easy is good right?

If pushed I could probably build a wall, but would I want it to support my house? Probably not until I’d had a lot of time building walls, maybe not even until the 10,000 hours it takes to become an expert had passed. It’s the same with my new PC: would I use it to store my family’s photos? No, I use a RAID disk set for that and the cloud (hmm, I do trust them, right?), as I’m unsure that the machine I threw together would be able to stay working for a long time. I find this to be the same in designing and developing applications.

Components, development tools and platforms have come a long way since the internet fields were paved over, and with that have come rapid prototyping, development and easy deployment. It’s now possible with the use of wizards and samples to throw a demo together in a very short period of time, like the construction of one of these modern barebones PCs. Lots of development is easy, but just because you can throw something together does not mean it will be robust and stable; because I was able to build one machine quickly it doesn’t mean I will have the same luck again, or that my machine, with its mismatched components, will not let me down when I need it most, like watching Snog, Marry, Avoid on the iPlayer!

It’s the same with code developed quickly: technical debt will often lead to decisions being made that could impact the delivery of a system down the line, be those due to difficulties in refactoring or failure to run performance tests on software during development. For demo purposes technical debt might not be important, as the code might never need to see the light of day beyond the demo, although the consequences of showing functionality that might be hard to implement reliably might live to haunt any project in the future. Lobbing technology bombs between pre-sales and professional services is always something that should be avoided, for good profitability reasons.

The Cloud Lure.

The cloud is another case of easy: it sells itself as a way to remove yourself from the burden of machines; your application can scale so long as you have the money to pay for it. Again, like the 30-minute machine build or the quick copy and paste development job, nothing is as easy as it seems, and even though the lure is there, careful planning still needs to be done in architecting any system, especially for those cloud platforms serving to emulate a real system. In a world where your application isn’t tied to a specific machine you need to be careful what you can trust: are you getting data from a machine that knows about your updates, or another machine that is just handling your request at that point in time? As your application scales to multiple worker or web processes in an environment like Azure or App Engine, how do you make sure everything is tied together?

Understanding how applications run in the cloud will still be needed, in order to utilise existing or still emerging patterns of development, such as those in the O’Reilly Cloud Application Architectures book or those being developed by Microsoft on their patterns and practices site for Azure. There is no magic going on here: fundamentally threads must be mapped to processors somewhere; hardware has to do some work and then notify other machines about what has gone on. How you handle this in any deployment, and how efficiently, will impact the performance of any system and solution.

Deploying applications into the cloud will be as complex as deploying applications onto any set of machines; the complexity might be more software focussed, relying less on the understanding of processor specs and more on the understanding of the best practices for writing scalable applications, such as those provided by Google for App Engine.

Easy come Easy Go.

When I heard David Chappell (the IT speaker, not the comedian) say the phrase ‘there is no lock-in like cloud lock-in’ I realised that whilst there is much promise in cloud computing, it still needs treating like any other system. Badly written and architected solutions will not magically perform in the cloud, and will always cost you more in the end than those that are optimised for performance and tested for scalability.

The cloud allows us to abstract ourselves from some aspects of deployment, but at the cost of making the software we deploy possibly more complex. As tooling and patterns become set, we will be able to benefit from the power offered to us by a service we can build and deploy within 30 minutes; just don’t bet your mortgage that it will be up in the morning just because it’s in the cloud.

Install as I say not as I do.

As we all know, the pace of change in technology shows no sign of abating, for good or ill. In software terms it’s a continual moving walkway of new patches, versions and features, usually for the better, sometimes not so. I’m both lucky and cursed to be able to install a wide variety of new software where I work, and at this moment I am installing a beta of ArcGIS 9.4 (or 10 as it will soon be) onto a new copy of Windows Server 2008 R2. I’ll soon be downloading and installing a copy of Visual Studio 2010 onto that virtual machine as well. Lucky eh? Well yes and no; lucky because I get to try out new technology as it comes out, unlucky as I’m sure there will be a whole host of frustrations about bugs and workflow changes that will eat time along the way.

This is good right?

When you see a new technology being released, usually as part of an existing product you use, it can be tempting to upgrade as soon as possible. When you’ve been working on that technology for a while, at the cutting edge so to speak, you want to tell people how good it is. The problem comes when the technology you use is not actually supported for the applications running on it. Sure it might work, even if you have to spend all night tinkering with the registry, but without support you’re on your own (or at the very best, it’s you and a forum of people!).

There is also a propagation of new and cool: as people install the newest and shiniest software, others do too, and as the successes mount up people believe that because it works it is also supported. This is definitely not the case, especially with server software.

Windows Server 2008 R2 and ArcGIS 9.3.1

I like Windows 2008 R2 in the same way as I like Windows 7; they have the same heritage, the main point being that they are not based upon the same core as Vista. Where possible I’ve upgraded all of my servers to this release, by which I mean all of those servers that do not run ArcGIS 9.3.1. Why, if it is so good? Well, because it’s not on the magic list. “What magic list?” I hear you ask; this one. The image below shows the list of platforms supported by ArcGIS 9.3.1. Look through it; notice no R2.

[Screenshot: the supported-platforms list for ArcGIS 9.3.1 – Windows Server 2008 R2 is conspicuously absent]

Now there are people who have no choice but to install on a new system such as R2, where purchasing or machine suppliers can’t give you a copy of non-R2 or Windows 2003. In these cases, such as the one given here, the time taken to install can be a lot greater than it would have been on a supported operating system, even if it seems that some people have an easier time installing it than others.

I get quite a lot of questions about which of the Microsoft operating systems is the best to install ArcGIS Server on. I used to say Windows 2003, as I felt at home in the IIS6 manager and used to get lost in the new IIS7 manager, but now I have my head around it I stick with recommending Windows 2008. I never recommend the use of desktop systems for anything more than brief testing (I do development against ArcGIS Server from a desktop machine, in my case Windows 7; I try never to install server software on my development machine if I can help it). Doing this gets you into good habits and doesn’t lead you into the problem of serving out large caches of data to an organisation using Windows XP’s crippled IIS 5.1 (yes I have seen it happen, and no I don’t encourage de-crippling through registry hacking).

Remember, by its very name ArcGIS Server is a server, not a desktop product, and friends don’t let friends install servers on desktops. Until I hear otherwise from places like here and here, I for one won’t be recommending R2 for ArcGIS 9.3.1 (and nor should other people be encouraging it!). If you have to, then good luck; I’ll try not to sail in your boat.

Install as I say not as I do

So to sum up: when people blog about software working together, they are often only giving their opinion about how it has worked for them. They might be able to give you advice about how it might work for you, but when your production system goes down in the middle of the night because your versions were not certified, I can guarantee that they probably won’t be coming round to explain the shortcomings to your boss.

When I say here that I’ve seen ArcGIS Server 9.3.1 running on Windows Server 2008 R2, don’t assume that it’s supported when the support site says that 2008 R2 isn’t supported for 9.3.1. If you want to go ahead and do it, it’s a free country, but don’t expect the support department to lose sleep over your downtime.

Sure it’s nice to try new software out once in a while, and even to install beta products to work out how they tick, but when money is on the line take some advice from someone who has been there before and be conservative with your software installs; if it’s a production system then play it safe, so you don’t need to employ a ‘cleaner’ to remove the mess.


Anyway, I’m off; my installs are done and I have beta software to make work.

The web world is (mostly) flat.

It might come as a shock to many of you, but often on the internet the world is flat. Yes, I know you thought this whole debate had gone out with the ark (or actually a little later), but after years of coming to terms with the world being a sphere, cartographers everywhere needed a method of putting that world down onto paper.

Now this was fine for many years until, on 2nd May 2000, the US decided to turn off Selective Availability and the whole world was seemingly brought into WGS84. The difficulty here though is that WGS84 longitude and latitude coordinates are an approximate representation of a position in the real world, geographic coordinates, whilst many maps, paper and web based alike, are representations of a flat world using Cartesian coordinates.

A world of BNG

Obviously as a GIS professional you know all this, although come to think of it, a lot of people in the UK probably don’t. Why, I hear you cry? Because the Ordnance Survey, in its role of national owner of all things spatial in the UK, decided that we needed a more accurate representation of the surface of the United Kingdom (fortunately we are not a large country, well unless you measure it in ego terms, so this can work quite well for us). In doing this they created the British National Grid (BNG), which allows them to produce all of their excellent paper maps (and not paper globes, which would be inconvenient for packing up in your rucksack).

Now why do I mention this? Well, for the first three years of my work at ESRI(UK) I touched nothing but data in the BNG projection; it was only when I needed to implement a solution using GPS data for tracking refuse trucks that I came up against the need to re-project data between two coordinate systems: WGS84 (for GPS) and BNG (to plot on the web map).

Whilst BNG is the preferred coordinate system for many users of OS data, amongst the many web developers who have come across mapping via the various web offerings of Microsoft, Yahoo and Google (or indeed in the past with ArcWeb Services) the coordinate system in use will be Web Mercator. In fact in the UK there is a chance that more people now use the Web Mercator projection than the BNG projection, a fact that shouldn’t be lost on people. So where was I? Ah yes, setting the stage for my problem.

What was I doing?

The reason I came to this post was that I was implementing a little demo for a colleague to show a GeoRSS feed of earthquakes (what I term a ‘classic feed’ as it appears in every demo) on top of the Bing map service, both within the ArcGIS Silverlight API. I thought that this would be no problem as I knew of two existing ESRI demos with the functionality I required: the GeoRSS layer sample from Morten, and the Bing sample from the ArcGIS Silverlight concepts section.

Now, using my advanced skills in copy / paste, I managed to get a map that looks like this (image right): the classic ‘everything is off the equator near Africa’ map. Hmm, I think; when I was using the old ArcGIS Services map (as per the sample) I could get an image like this (see below).

Ah-ha, I think, something is up in the state of projections. How can I request the GeoRSS feed in another projection, in this case Web Mercator, rather than the long/lat WGS84 projection that was given as standard?

The answer to my ponderings was that GeoRSS defines the returned format as having to be WGS84, and in order to place it on top of my Bing map I would have to re-project the data myself. Fine, I thought, I know how to do that; ArcGIS Server doesn’t have a Geometry Service for no reason you know.

Chatting with the Geometry Service

With the implementation of the Geometry Service within ArcGIS it’s been very easy to re-project data between coordinate systems; you can find the documentation to do this here. This is good as it allows you to project between many different coordinate systems, and as it’s all server bound you don’t need to pollute your client with any algorithms.
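To give a flavour, the project operation on the REST API takes the input and output spatial references plus the geometries to convert; a call of roughly this shape (server name hypothetical; 4326 is WGS84, 102100 the Web Mercator well-known ID) does the job:

http://myserver/ArcGIS/rest/services/Geometry/GeometryServer/project?inSR=4326&outSR=102100&geometries={"geometryType":"esriGeometryPoint","geometries":[{"x":-1.5,"y":52.4}]}&f=json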

It should be noted here that over time there have been a number of SRSs used to define Web Mercator. An email on the boards which explains the differences between the numbers used can be found here (see the post by Melita Kennedy).

The problem is that it can be rather inefficient to get points into the client, send them off to another service to be projected, get the data back and then display it on the map. This solution might be the most flexible (as it could possibly handle any projection required) but it leaves a bad architectural taste in the mouth.

I put it to the back of my mind in the folder entitled ‘architectures to use when other methods fail’ and went on thinking about how it might be done; back to math (+s for UK readers).

An Algorithm

I’ve always wondered what the actual algorithm for projecting between WGS84 and Web Mercator was; this was a demo, so I didn’t have to be too careful about the accuracy. I once again used my Google brain to come up with the following link, which contained a Python script with the algorithm I required. Converted to C# (not too hard) it looks like so:

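In outline, assuming the usual spherical radius of 6378137 metres, it comes down to two lines of maths:

// spherical Mercator forward projection: WGS84 lon/lat in degrees -> metres
private static void ToWebMercator(double lon, double lat, out double x, out double y)
{
    const double R = 6378137.0;                    // radius used by the web mapping world
    x = R * lon * Math.PI / 180.0;                 // longitude scales linearly
    y = R * Math.Log(Math.Tan(Math.PI / 4.0 + lat * Math.PI / 360.0)); // latitude stretches towards the poles
}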

When I placed this into my GeoRssLoader.cs class file (see the GeoRSS Silverlight sample again) I managed to get the points placed into the correct position; see the map below.

Sorted, I thought, but then I got thinking. Surely this is an amazingly common task that everyone is doing; GeoRSS is very popular and there are one or two ESRI developers out there (I know, I have 250+ of them coming to see me present at our DeveloperHub conference next week).

If you’re interested in converting between WGS84 and OSGB36 then this link should be handy; if I get time I’ll knock up a C# class doing it and post it on this site somewhere.

The Easy Way

So I was showing my solution to another colleague of mine, all impressed that I could do it using the power of math, when he said that the JavaScript API had a function for it built in, called esri.geometry.geographicToWebMercator(). Gah! I thought, all these JavaScript dudes have it so easy; no one ever creates one of these for us poor Silverlight chumps. Well actually they do.

Hidden away in the ESRI.ArcGIS.Client.Bing.Transform class are the following two methods:
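Namely GeographicToWebMercator and WebMercatorToGeographic (the naming mirrors the JavaScript call above), each taking and returning a MapPoint, so usage is a one-liner each way:

// converting a WGS84 point for display over Bing, and back again
MapPoint wgs = new MapPoint(-1.5, 52.4); // lon/lat in degrees
MapPoint merc = ESRI.ArcGIS.Client.Bing.Transform.GeographicToWebMercator(wgs);
MapPoint back = ESRI.ArcGIS.Client.Bing.Transform.WebMercatorToGeographic(merc);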

Both are much simpler than including your own projection algorithm within your code; why reinvent the wheel after all? As we can see, there are always many ways to do things with technology. It’s often the simplest one that avoids notice when you’re thinking through a problem, although it should be said that there are merits to all approaches, due to flexibility (REST service), transparency (algorithm) or simplicity (existing class); the choice, as they say, is yours.

Adventures through the Silverlight

Over the last few weeks I’ve been having a few adventures in the world of Silverlight. A bit like Alice, I’ve been following white rabbits down holes and through looking glasses. What I’ve discovered is that having an IDE doesn’t always make things easier, especially when the error is occurring somewhere between the chair and the keyboard, a place which is notoriously hard to debug.

Brain don’t fail me now

One issue I have is with my brain. If you start thinking as if the development environment is going to help you, then when it doesn’t it can completely throw the processes you use to figure stuff out, if indeed you can figure it out. Strangely enough (well, for me anyway), if I’m in an environment where there is little help, read ‘no IntelliSense’, then my brain rewires itself for self-help. This can often be easier on the development, as I tend to check the code more and be more robust in my development methodologies (i.e. checking my environment is set up correctly, for one thing). I usually find the differences in coding for Silverlight or using Dojo follow these patterns (I often still use VS for Dojo, but obviously get precious little help!).

With Silverlight my development is all done in Visual Studio 2008 (with some design done in Expression Blend of course). Now 2008 is quite helpful when checking the syntax of C# code, but it can come off the rails with the XAML syntax, so once you’re done in Blend and are hacking around with the mark-up you can often come unstuck. Many a time I’ve spent at the top of page.xaml tinkering with the namespaces, wondering why the code won’t work when I’ve copied it straight from the ArcGIS Silverlight API samples (note: always check the breaking changes in any of the API release notes, like here, as the samples sometimes lag behind the releases and don’t always correspond).

Other times the IDE just goes a bit spooky on you, such as when I added a new class to the top of my page.cs file (don’t ask me why I wasn’t refactoring things into different places; I was prototyping, it’s allowed). Now I figured that this wouldn’t be a problem, I’ve often slung a class at the top of a file without issue (or none I can remember!), but whilst the compilation and run of the application had no problem, the actual linking up of the page.xaml and the page.cs seems to have been b0rked.

Every time I needed to add a new event handler, or to navigate to an existing handler from the XAML to the code I would get the following error:

To generate an event handler the class ‘page’ must be the first class in the file.

Now I didn’t believe what my eyes were reading at first; of course, I was using Visual Studio so my brain had partially shut down. Therefore, using my outsourced brain (read Google), I spent a few minutes trawling the interweb in the hope of finding a solution. I did find it, buried deep down in the following thread, where it spelt out the reason for my ‘code fail’ to be the fact that my new class was first in the code-behind file; move the class to the end and hey presto, everything was tickety-boo.

Now this serves to highlight both my initial problem of my brain expecting simple issues like this to be sorted out by the wonder that is Visual Studio 2008, and secondly the fact that whilst it is an order of magnitude better than Visual Studio 2005 for its integration with Silverlight, it won’t be until 2010 that Microsoft will have a true development environment for it. Note to self: better get installing the 2010 RC when it’s released next month to check life will be peachy.

A Dash or Two

Where Silverlight (or any RIA environment, Flash, HTML 5) really excels is in the delivery of dashboards that allow for the easy cognitive processing of information without the clutter of hardcore GIS tools that are often prevalent in some internet mapping applications.

Indeed, in my opinion (not necessarily anyone else’s though), if you’re using Silverlight to just deliver Y.A.M.A. (Yet Another Mapping Application) then, unless you really need it to be rotating on a flying cube surrounded by dancing leprechauns (however tempting it might be), you probably should be using a more standard HTML/JavaScript based client, such as Dojo, which doesn’t have the plug-in overhead.

The ability to present and link multiple maps together, all updating in real time, with graphs and reports, can really help sell the benefit of GIS to upper management, who often don’t get excited about data formats, tile caching and the different APIs. Show them the ability to visualise all their assets and modify their assignment in real time, allowing for visual modelling of costs, and you might be on to a winner; show them the common operating picture of an unfolding disaster and you almost certainly are, especially if it can save money in the long run. Sure, it might use all of the cool technologies under the hood, but most people who make decisions don’t care; they want simple tools that can leverage powerful geoprocessing tasks without even noticing. With good design and the interactivity given by Silverlight (and other RIAs), the move of GIS from supporting niche decisions to impacting throughout the business should become a whole lot easier to show and use.

Let me at it – hold the white rabbit

Hopefully you can see not only the wonderland that can be offered by Silverlight but also the frustration that can occur if you blindly follow white rabbits around whilst developing. If you don’t have time to build your own Silverlight client from scratch, you can always get a head start from one of the example applications from the community section of the Silverlight resource center here, or some nice examples of dashboards and UI can be seen on the ESRI North East Africa site here.

Which date is it?


I have to admit I have a problem with dates. I always seem to draw a blank when trying to remember them; I’ve outsourced much of my anniversary notification to Outlook and my iPhone, or I work with them in applications. So it has been ironic this week, as the last decade ends and the next begins, that the first thing to stump me is a little date problem.

A choice of dates

I’ve been doing some development using the Silverlight API and I came to the part of the project where I needed to filter data using a date range and display it on the map and in graph (or chart) form. Now, querying with dates in any system is always tricky. Getting the right format can be the difference between date success and date fail! The right positioning of brackets or hash (US pound) symbols has often been the bane of my life.

As this is the Silverlight API, I’m backing onto the ESRI REST API for my heavy lifting. So I need to know the right REST syntax in order to create a correct query, for use both as the ‘where filter’ for the feature layer and as the ‘where filter’ for the query task. Fortunately the REST API provides a nice query form with which you can enter these parameters to your heart’s content (sample server example link).

Now, at this point you might think my work here is done, that this form and the collective knowledge of the internet (by that I mean my Google search box, to which I have outsourced my ability to remember syntax) would be able to get me to my goal of a working filter. In some ways it did, but then I found that there was a choice.

Choices Choices

Looking at the help page for the REST API here, we can see the query layer section, which describes the possible filter for the where parameter as ‘any legal SQL where clause operating on the fields in the layer is allowed’. Now, in the project I was working on, the dates being queried were in two different layers: one which was being filtered and one providing the data for plotting on the graph. Both layers contained one or more fields of the type “esriFieldTypeDate”, so I thought the same query would work on both, wouldn’t it?

Well, it turns out that the answer is no. I initially started with one layer and was provided with the date format of #2009/01/01 00:00:00#, which worked fine for querying a single field.

In usage this would give a query in the format of

"FIELDNAME > #2009/01/01 00:00:00#";

But when I applied this to another layer, where I was filtering data by two date fields, I got the strange error:

“Unable to perform query. Please check your parameters.”

Hmm, I thought, but it worked on the other layer (which is synonymous with the cry ‘but it worked on my machine’). So, doing a bit of googling, I managed to turn up a link on the forums where someone had had a similar problem, and a suggestion to use the following syntax in the where clause:

"FIELDNAME > DATE ‘2009/01/01 00:00:00’";

This worked for both sets of queries. Whether it will work for all databases or datasets will require more testing, but if you’re using dates in your application then hopefully this way of formatting a date query will work for you. If I come up with any more definitive answers on which format to use where, or a list of any other date formats that might not work with all datasets, then I will endeavour to update this post!

Pointless Predictions

As this is the first post of the year you might be expecting a string of pointless predictions about cloud computing, three screens and maybe a slate. Unfortunately I’ve yet to obtain the job title of ‘Futurologist’, so I’ll leave that to those sort of people who probably should be out there doing a real job, like policeman, soldier or plumber. Hmm, I think I might need to shut up now and slink off with my Solution Architect business cards before someone outs me as a fraud.

Anyway, enough of my blathering; I hope you have had a happy new year up till now and that your date searches will forever be successful.

Why Fiddler is like a Sonic Screwdriver

This post was initially going to be titled ‘Why Visual Studio 2008 is like a VW Bora’, in reference to the current fun I’ve been having with my Silverlight adventures. Unfortunately that particular episode will have to wait, although I might give it as my Ignite presentation at our company away day; watch this space, as I’ll only give it if we don’t get enough other presenters. Anyway, I digress, or should that be ramble; the current post has been sparked by two things: watching Doctor Who episodes with my daughter, and fixing a simple connectivity problem in Silverlight.

Sonic What?

Just in case you’re not a Doctor Who fan, and I know it’s hard to believe that there are a number of them out there, or maybe you’re not British and haven’t been brought up with a good Time Lord for the last 40 years (give or take a few gaps), then a little explanation is in order about the sonic screwdriver. The sonic screwdriver is a plot device which allows the good Doctor to interrogate any system or machine to find out information about it, fix it or damage it. Usually it’s just a flick of the wrist, a press of a button and a flash of light, and it’s job done, procedure complete, information acquired. Now, unfortunately, in the real world we don’t have a sonic screwdriver that can do this, but we do have Fiddler.

So what has this got to do with Fiddler?

Fiddler is a free tool that allows you to view the requests made from a browser (usually IE). It can be used to interrogate the requests and responses and to visually give you an indication of what might be wrong with a particular application. Like a sonic screwdriver, it can gather information just running in the background without any interaction, and it can tell you how web applications work at the browser level. This can be useful for seeing why something might be failing in an application even though you might not have access to the code, or for seeing what a system is requesting even though the code is not your own.

It allows you to see exactly the information a website is sending to and receiving from the browser, very useful in an era where the logic is maintained within the browser as JavaScript applications or Silverlight/Flash programs. Here it is often useful to see the responses back from the server, or more importantly why a request is failing; often this is to do with security issues or cross-site access rights.

It’s screwdriver time!

Often the time you break Fiddler out to diagnose a problem is the same time that Doctor Who breaks out the screwdriver: system calls failing, applications not working properly once installed on a remote site, or map services not responding. Panic starts to set in, hours go by as you vainly try to debug the application, then someone (usually me) pipes up: ‘have you seen what you get in Fiddler?’


Now, looking into Fiddler is like staring into the Matrix: when you have done it enough you start being able to see the woman in the red dress even though the screen just shows the weird green dots. You can start to pick out anomalies in the calls, and Fiddler helps you: it highlights file-not-found and other errors in bright red, and it shows you cached items in grey, both of which can give an indication of possible problems.

Authentication issues and caching problems can both cause errors that are hard to track down, can manifest themselves in esoteric ways, and can end up burning a lot of time to diagnose. Fiddler can be your sonic screwdriver in those moments.

Divining a problem.

Now this post could equally be titled ‘how many times does a man need to forget to place a crossdomain.xml file on a web server before it stops being funny’, but it does serve as a good example of where Fiddler could have shown a problem well before the final issues were resolved. Crossdomain.xml and its Microsoft equivalent clientaccesspolicy.xml are used by Flash and Silverlight (which can use both) to tell their particular runtime environments that they can access services and data on a particular server. This is particularly important when a rich internet application needs to access services or files from a server other than the one it was served (downloaded) from initially. Without these files present at the root of the server, the request will fail, often with unforeseen consequences.

I must admit that, going through the story, the reason for the problems becomes obvious, but the feature layer was coming from another server which I had not used before, and I had no idea that there might be an issue with it. It serves as a reasonably simple example nonetheless.

In my particular case I was using the ESRI Silverlight API in conjunction with the most excellent (and free!) Blacklight control library. Now, my application was working without issue when I decided to add another layer into my map using a feature layer. I merrily put the lines of XAML into my page to add the feature class (notice server names have been changed to spare the innocent; any maps generated are done by actors).
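The XAML was something along these lines (URL fictitious, and assuming the usual esri namespace mapping; a feature layer points at a single layer within the map service, here index 0):

<esri:FeatureLayer ID="QuakesFeatureLayer"
                   Url="http://someserver/ArcGIS/rest/services/Quakes/MapServer/0" />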

Having set up the right renderers and such like, I proceeded to run the application. Nothing came back; I checked the code, double checked, triple checked, but nothing. The base maps came up but no features on top. I then proceeded to add the new map service as a normal dynamic layer, to check there wasn’t something wrong with the FeatureLayer class. The XAML looked as such:
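Roughly like so, with the same fictitious server, this time pointing at the whole map service:

<esri:ArcGISDynamicMapServiceLayer ID="QuakesDynamicLayer"
                                   Url="http://someserver/ArcGIS/rest/services/Quakes/MapServer" />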

Now this one did give me an error in Visual Studio.

Doh! I think. I had made the classic schoolboy error of not adding the cross-domain file onto the new server. I could see this in Fiddler too, so just waving Fiddler at the requests would have allowed me to divine the problem, much like (in my head anyway) the sonic screwdriver.

Now this is a good example of being able to see the problem in Fiddler even though the application wasn’t reporting any issues. Diagnosing problems in this way should often be the first port of call with any issue that doesn’t get picked up by Visual Studio, especially when calling remote services.

Adding the following cross-domain file to the server (it’s internal so it doesn’t require too much security, but don’t use this in the wild verbatim) allowed the application to work fine with both the dynamic layer and the feature layer. Problem solved, Daleks defeated (again, in my head).

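It was the classic allow-everything policy, something like:

<?xml version="1.0"?>
<cross-domain-policy>
  <!-- wide open: fine on an internal box, not something to ship -->
  <allow-access-from domain="*" />
</cross-domain-policy>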

Every time I use Fiddler it saves me time; the key is remembering to use it earlier in the problem-solving! That way you can save the time burnt on things that have simple solutions and concentrate on the other time killers, like getting agreement on the UI design; as we know, everyone’s a critic!

So where do I get my screwdriver?

Whilst Fiddler is written by a Microsoft employee (Eric Lawrence) it’s not actually a Microsoft product ‘per se’, but Microsoft does have some good information about how to use it here (msdn) and here (msdn).

If Fiddler isn’t your thing, or you just think that pointing a glowing blue light at a computer might be able to fix any communication problem even in the real world, then you can get your own from Amazon here; in fact, if I could get a Wi-Fi one, maybe I could hook them both up <goes off to tinker in the shed>.

The banana of doom or 404 with style.

I’ve been hyper busy in the bat cave (aka the garden office) with end of year projects, Christmas parties and general shenanigans (who sounds like an Irish military commander). It’s times like this when it’s good to see another site straining under the weight of usage, and an imaginative page showing that times are good yet the server load isn’t. Many sites (the tr.im one is to the left) now put up fun pictures when their servers are taking the strain. These pictures let you down gently, but thoroughly (no slow response, just no response), with a promise that coming back later will make everything all right.

We have come to rely on many web sites to provide instant responses to our every whim, and when they don’t it’s somewhat of a shock. We never expect a Google application to break, so when Gmail is occasionally down everyone gets all of a twitter about how terrible it is, especially given that for most people it’s a free service. The same goes when services such as Bing go down; TechCrunch said it well: ‘It’s one thing when startups, like Twitter, go down, which happens all the time. It’s another when a major search portal does it’.

In fact Twitter has one of the most recognisable site-unavailable images on the interweb, that of the fail whale. This charming graphic, often seen by those people twittering via the website as the popularity of the site has grown, even has its own fan club (‘the fail whale fan club’) where you can buy mugs and t-shirts with the little whale plastered all over them. It’s amazing what a bit of imagination can do to endear people to what is actually a site failure (for whatever reason).

Under the hood

Now, all these nice pages are doing is hiding the HTTP error codes that we have all seen emitted by our favourite applications, which, when not altered from their natural state, tend to make their way to our web browsers, which then render them with the minimum amount of eye candy possible. Not only is it a sign of a poorly developed application to have raw error messages delivered back to the browser; it’s also bad to allow raw HTTP errors to fly around without even a little bit of window dressing.

This is a two stage process:

Firstly, any web application or end point should emit the right codes when returning information, such as 200 for success and 404 if something isn’t found. REST services need to use HTTP codes appropriately to allow for the proper caching of information, through the use of conditional GET, and for the identification of bad requests. The RESTful Web Services bible has an appendix dedicated to understanding which of the 41 HTTP codes are needed (go on, it’s an interesting read, honest guv!).
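As a sketch of the conditional GET side of that (ASP.NET flavoured; where the last-modified date comes from is up to your service), the idea is to answer 304 when the client’s copy is still good:

using System;
using System.Web;

public static class ConditionalGet
{
    // returns true if a 304 was sent, so the caller can skip writing the body
    public static bool TrySend304(HttpContext context, DateTime lastModifiedUtc)
    {
        string since = context.Request.Headers["If-Modified-Since"];
        DateTime sinceDate;
        if (since != null && DateTime.TryParse(since, out sinceDate)
            && lastModifiedUtc <= sinceDate.ToUniversalTime().AddSeconds(1)) // HTTP dates have one-second resolution
        {
            context.Response.StatusCode = 304; // Not Modified: no body, the client re-uses its cache
            return true;
        }
        context.Response.Cache.SetLastModified(lastModifiedUtc);
        return false;
    }
}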

Secondly, the web server needs to be configured to return a nicely formatted and informative page, so that the user is calmed and reassured as their application goes down in flames. In IIS you can configure the error pages that are sent by a whole server, or by an individual site, through the admin pages, and then craft the appropriate response that you wish your users to see.
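For an ASP.NET application the same effect can be had in web.config (paths hypothetical):

<system.web>
  <customErrors mode="RemoteOnly" defaultRedirect="~/errors/oops.html">
    <!-- a friendlier page for the classic not-found case -->
    <error statusCode="404" redirect="~/errors/404.html" />
  </customErrors>
</system.web>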

There but for the grace..

Whilst it’s easy to mock a site that’s having scalability woes, especially if it’s run by one of the major internet companies (the Bing error page wasn’t too comforting at first), it only takes a simple post by Slashdot to bring all but the best designed or resourced sites to their knees. Hopefully now, if it’s one of your sites, at least you’ll look good on your knees and your customers will remain calm. If you’re stuck for ideas, Smashing Magazine has a list of cool 404 error pages that might give you some, even some off-the-wall ones such as my favourite:

[Image: my favourite off-the-wall 404 page]