MapIt system requirements and the Developer Hub.


I had a query today about the system requirements of MapIt. For those of you who don’t know what MapIt is yet, I point you towards the link above. It’s a great product for a certain market, especially those Microsoft partners who want to make use of their data within SQL Server 2008 with all of the spatial bells and whistles available. Jithen gives a good overview of what you get out of the box, including the system requirements, here on the ESRI wiki site.

Many people will hopefully be buying or evaluating MapIt over the next few months and should keep an eye on a number of the requirements. Specifically:

  • It requires IIS 6 or IIS 7, so no XP for the server.
  • It runs on 32- and 64-bit Vista, Windows Server 2003 and 2008, which is obvious given the above requirement, but check the SPs supported for each.
  • It requires Visual Studio 2008 SP1, Silverlight 2 or 3 and .NET 3.5 SP1. Mixing Silverlight versions, especially toolkits, on a single machine can be a world of hurt, so tread carefully. Development can of course be done on Windows XP as long as the requirements for Visual Studio 2008 are met.
  • It supports all versions of SQL Server 2008, including Express!

    See and hear it in action!

    Coupling MapIt with the new Microsoft WebsiteSpark initiative would allow companies of all sizes to build a powerful internet mapping solution, and when you’re ready you can move up to a full ArcGIS Server package for more comprehensive GI services and data management capabilities.

For people in the UK, we’re holding a seminar with Microsoft on the 6th of October about ESRI and Microsoft integration, MapIt and the use of Bing Maps. Go over to the ESRI(UK) site here if you’re interested in registering. I’ll be speaking there about MapIt and integrating it with enterprise data and Bing Maps, so I hope you can come along.

For those people who can’t make it, there are some excellent webinars and links on the new ESRI(UK) Developer Hub. Especially Sarah and Mark on the ArcGIS Web APIs; what a team!


The mysteries of SEO.

As you can see I had this idea of hosting my own blog; in case you wondered, you’re here reading it. Now I thought that would be a simple thing to do: just put a site onto the internet and eventually the magic that is the Google-bot or the Bing-bot (do we call it that?) would one day swoop down and make me part of the internet (I firmly believe, albeit slightly misguidedly, that if you’re on a public site and not in the index then you’re not actually on the internet). Before I go into my failings as a web developer, I feel that I need to justify myself.

An explanation.

I’m a fairly seasoned web developer (read: old). I understand the intricacies of JavaScript, Flash, Silverlight; you name it, I can probably work out how to code it eventually (come on, it’s not hard, well except WCF, but I’ll post more on that another time). But as you can see, most of my time is spent developing sites and solutions for enterprise customers. They don’t need sites that are indexed or have a high PageRank, or that use AdWords or meta tags. I now realise that some of the stuff I have done would have benefited from some or all of that! Getting your site indexed and high up the page for certain search terms is a whole industry in itself, and there are some helpful sites that can explain the process.

How did I get here?

With my host it was fairly easy to get my blog site up and running. It’s a classic ASP multi-tenant provider with a shared server offering, with all the benefits (mostly cost) and shortcomings (mostly performance). It has an automatic deployment of WordPress at a certain patch level, 2.7.1, and then it’s simplicity itself to use the internal upgrade feature of WP to make sure the software is up to date. The ease with which WordPress handles deployment and the integration of new themes and components should be a model for all sorts of web-based applications, but I digress. Once up and running, I thought that was it: all I needed to do was add some useful posts, my site would become one with the collective, and all that remained was to use Google Analytics to work out how many people were coming to the site and from where.

Now this is where it became interesting. Google Analytics is an excellent tool for finding out who’s accessing your site and from where; it even shows a map of the locations, around the world or within an individual country, that people are coming from. The thing is, unless you have a lot of friends and colleagues who might want to read (and be interested in) what you’re writing, you need to get it out to a wider audience. Google has another set of tools to do this: a way of expediting the crawling of your site by Google, sort of telling the search engine that you’re ready for your close-up.
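For completeness, the classic ga.js tracking snippet of the time looked roughly like this; the UA-XXXXXXX-X account ID is a placeholder and the exact code Google generates for your account may differ slightly:

<script type="text/javascript">
// Load ga.js over http or https depending on the page's protocol
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
try {
// Replace UA-XXXXXXX-X with your own Analytics account ID
var pageTracker = _gat._getTracker("UA-XXXXXXX-X");
pageTracker._trackPageview();
} catch(err) {}
</script>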

Where’s your Sitemap?

Now before you think that registering your site with Google, or Bing for that matter (they have a basic set of equivalent tools), will open the floodgates of people to be exposed to your pearls of wisdom, you should think again. Getting into the Google index isn’t that hard; getting listed high up for a particular search, or even on the magical first page, needs a lot of people to link to your site and add trackbacks and comments, to show Google that your site has value to other people rather than just being a repository of drivel (I leave you to decide which this is). It is at this point that the science of getting into search turns into the art of Search Engine Optimisation, or SEO. Wikipedia defines SEO as follows:

Search engine optimization (SEO) is the process of improving the volume or quality of traffic to a web site from search engines via “natural” or un-paid (“organic” or “algorithmic”) search results.

SEO is the process of improving your site’s position in the search index so that, for the right keywords, your site ranks in the top two pages at least. It relies on a number of factors but basically starts with two files, sitemap.xml and robots.txt. The first file tells the search engine specifically which pages to index, and the second tells search engines what not to crawl. The difficulty is that as time moves on there are no hard and fast rules determining what a particular bot finds important. As ‘blackhat’ techniques have been used to play the system and improve page rankings, so the algorithms have changed to sniff out people not playing fair. This means that whilst the sitemap is important for telling the crawlers what to index, the robots file is equally important for telling them what not to index, so you don’t find yourself blacklisted by the crawler for trying to play the system.
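To give a flavour, a minimal pair of files might look like the following; example.com and the paths are placeholders rather than anything from a real site. First the sitemap.xml:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/an-interesting-post/</loc>
    <lastmod>2009-10-01</lastmod>
    <changefreq>monthly</changefreq>
  </url>
</urlset>

and a robots.txt alongside it:

User-agent: *
Disallow: /wp-admin/
Sitemap: http://www.example.com/sitemap.xml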

Thus you have a never-ending merry-go-round of SEO optimisations becoming redundant and new techniques being developed to try and keep people’s sites near the top. This doesn’t even take into consideration your actual location, how the search engine knows it, and how it will send you to your ‘local sites’ based upon where you’re coming from. This sort of geo-targeting isn’t new, but it is increasingly being used to target people on all sorts of sites, from search to Twitter (see Trendsmap for an excellent example).

Now in most of my web development life SEO isn’t a term that ranks highly in a solution to maintain pipes or manage gazetteer data, but as more and more sites are exposed to the web and more companies see value in exposing their data for everyone to use, mechanisms like SEO and GIS will often need to be used together. This is especially important in sharing spatial metadata in a form that can be indexed easily by search engines and therefore more widely disseminated.

SEO and ArcGIS Server

Now I was wondering how we can use SEO both for promoting and sharing information from an ArcGIS Server implementation and for protecting services from being indexed when you don’t want them to be. The REST API has been around for a while now, and you can see how Google indexes ArcGIS sites ‘on the web’ by doing a search on “ArcGIS/rest/services”. You could block a service from the index by using a standard pattern within your robots exclusion file such as:

Disallow: /ArcGIS/rest/services/mymapservice

This might be supplemented by more complicated patterns that use wildcards, although it should be understood that your mileage may vary according to the bot doing the crawl, as wildcards deviate from the original standard.
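For example, the big crawlers such as Googlebot understand a wildcard rule along these lines, even though the * isn’t part of the original robots.txt specification (the path here is purely illustrative):

# Block every map service endpoint under the REST root
User-agent: *
Disallow: /ArcGIS/rest/services/*/MapServer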

Of course it is important to understand that only ‘good’ crawlers obey the robots.txt file; ‘bad’ robots will crawl anything. If you put data onto the internet, you have to assume it’s going to be used. It’s therefore important, if applications and data need to be secured against unauthorised usage, that you use the appropriate security measures for your application; more details about this can be found in the ESRI documentation here [Working with secure ArcGIS services].

Services are only ever one part of any application; it’s also important to make your user interface as SEO friendly as possible, even with a mapping interface. This post (from SEOmoz.org) gives a good overview of how you can provide spatial information in a format that allows indexing. A lot of it is similar to providing accessible information: a bot often ‘sees’ a web page like a screen reader, ignoring the image-based map information and concentrating on the links, the URLs and the text of the application, so creating an accessible version of the site often creates an SEO- and indexing-friendly version too.
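As a rough sketch of the idea, with purely illustrative names and paths, the map control can sit alongside plain, crawlable markup describing the same features:

<div id="map"></div>
<!-- Plain HTML equivalent of what the map is showing, visible to bots and screen readers -->
<ul id="featureList">
  <li><a href="/locations/springfield-depot/">Springfield Depot</a>: pipe maintenance scheduled this week</li>
  <li><a href="/locations/shelbyville-works/">Shelbyville Works</a>: gazetteer record updated</li>
</ul>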

As GIS and spatial systems increasingly find themselves used for both commercial and public services, getting them indexed is only going to become more important; how that is done is still very much an art.

Know Thy Browser!

When developing any web-based application it pays to know exactly what your browser can do. If you’re developing an internal application you can make sure you maximise the website for what the browser can support; this is Nirvana (provided the internal browser isn’t IE6, which in this case it usually is). Just make sure you’re aware of any upgrades to the browser on the network, and test early!

If you’re working in the heterogeneous environment that is more common on public-facing sites, then understanding which browsers you’re going to have to support is fundamental, as this will guide not only what you are going to test but also what technologies you’re able to use.

What do you need to support?

Often you start a project with some lofty goal of supporting all known browser types, or you’ve worked on a project before and stipulate that you’ll only support one or two of the main ones. Then you start getting calls about problems with your site: some oversight of a major browser, or a newly released browser version, has caused problems, making it look uglier at best or stop working at worst. How do you know what to support, or who is using your site? Often a lot of this information can be captured by analytics software such as Google Analytics, which can monitor who accesses your site and determine the browser they are using. For many clients this information already exists for the main website and can be provided as part of the requirements of any project. For this limited site, the browser breakdown for a certain period came straight from Google Analytics. Monitoring this list for changes, and for increases in new browser types or updated versions of existing browsers, should allow you to plan your ongoing testing strategy and make any changes to the site before those browsers become more commonplace; this is obviously preferable to waiting until you see betas of new browsers start accessing your site. So now you know what you need to support, how do you find out what these browsers can do (without endless testing of your own)?

If you have endless time, a full list of many browsers is available from Wikipedia.

Help at hand, ironically in a browser.

With a variety of browsers out there, it helps to have a set of tools to tell you what is or isn’t supported and how each might perform. There are a number of sites that do this.

Browserscope provides a collaborative list of what a particular browser supports by profiling your browser and adding it to the list of browsers that are ‘out in the wild’. If you see from your analytics that a new browser type is often accessing your site, you can check Browserscope for what it supports, which is very useful. More information can be found on Steve Souders’ blog here.

We all know about CSS and JavaScript issues in browsers. A number of websites have been dedicated to this for years, such as QuirksMode with its browser compatibility tables (now with updated HTML 5 information, yay!). One area which is popular with a certain section of online advertisers but is often not considered by many developers is the CSS support of email clients.

It is often the case with spatial analysis, whether using a map or not, that the output might be emailed to people who fall within a certain location, and the application you’re developing might need to send an HTML-formatted email to customers. Making sure you use the right sort of formatting in this case is as important as support within a browser. This list from Campaign Monitor of CSS support in email clients gives a quick overview of what you need to be supporting within any particular email system. Obviously testing will always be the final arbiter.
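As a sketch rather than a template, HTML email tends to fall back on table layouts and inline styles, since many clients strip style blocks and ignore more modern CSS; the location text here is purely illustrative:

<table width="600" cellpadding="10" cellspacing="0">
  <tr>
    <td style="font-family: Arial, sans-serif; font-size: 13px; color: #333333;">
      <!-- Keep styling inline rather than in a <style> block -->
      Your nearest depot is <strong>Springfield</strong>, approximately 2.3 miles away.
    </td>
  </tr>
</table>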

ArcGIS Browser support

Whilst Flex (Flash) and Silverlight clients are protected by their individual hosting components, the WebADF and JavaScript API clients both run within the browser. As with any library that is itself built on other libraries, you need to be careful about what the whole stack supports, especially when you use non-core components such as those provided by DojoX. There are a number of resources that should be checked once you know the level of browser you need to support for any website.

ArcGIS Browser System Requirements: note this is for the 9.3.1 library, and there are compatibility issues with the newest version of IE, IE8. Checking how a site works in IE7 compatibility mode may be enough to make it work in that browser; this can be forced using the IE=EmulateIE7 meta tag, and you can find more details and caveats at the IE blog here.
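The tag itself goes in the head of the page and looks like this:

<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />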

Dojo Browser Support: This is important if you’re aiming at rolling out additional components (Dijits) with the library; Dojo 1.3 is fully supported in only certain browsers, and this should be checked before any project is started.

ASP.NET AJAX Browser Support: Again this is useful to know if you’re deploying components with the WebADF. The site seems to suggest that you don’t use Linux when browsing ASP.NET AJAX-based sites, although I would assume they would work fine in Firefox on Ubuntu or other systems.

Support Everyone (maybe) with Graceful Degradation

One of the issues when using modern technologies, especially on public sites, is the need to provide data to all users regardless of the browser being used. This can often be achieved by providing a view of the data on a page that has only basic content and search facilities. For many components, such as those provided by Dojo or ASP.NET, the degradation is often provided within the component itself. Sites should still be tested with a variety of browsers, and information about what each browser supports can then be given to the users of the site, rather like Yahoo does with the graded browser support ratings for their YUI (Yahoo! User Interface) library.
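A minimal sketch of the idea, with illustrative element names: let the scripted components enhance the page, but leave a basic HTML route to the data for everything else:

<div id="interactiveSearch">
  <!-- Dojo or ASP.NET AJAX widgets are created in here when script is available -->
</div>
<noscript>
  <!-- Plain form posting to a server-side search page for browsers without script -->
  <form action="/search" method="get">
    <label for="q">Search the data:</label>
    <input type="text" id="q" name="q" />
    <input type="submit" value="Search" />
  </form>
</noscript>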

Browser support has always been a complicated issue; over ten years ago I wrote a column about the same compatibility issues within browsers. Today’s libraries handle a lot of the browser support issues, both at the client level and in communication with the server, but they are not perfect, and only by understanding the requirements of the system’s users and the capabilities of the minimum browser to be supported will the right choices be made.

Dojo – Well done, you have found another piece of the hidden documentation!

I’ll start this post by saying I like Dojo. It makes developing JavaScript a lot easier than hacking the code yourself, especially when trying to support multiple browsers. As with everything, though, it’s not perfect. Whilst getting hold of the code is as simple as linking to the AOL-hosted site, getting hold of the information about how to use the code makes you feel like you’re playing some sort of Japanese console adventure game, probably on the Wii, hunting the internet for snippets of code, working examples or even fully fledged sites that might give you a small insight into how the library is best used.
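For what it’s worth, pulling Dojo from the AOL CDN is just a script tag plus a dojo.require or two; the 1.3.2 version path and the DateTextBox example are illustrative, so check the version and modules you actually need:

<script type="text/javascript" src="http://o.aolcdn.com/dojo/1.3.2/dojo/dojo.xd.js"></script>
<script type="text/javascript">
  // Pull in a widget module cross-domain, then use it once everything has loaded
  dojo.require("dijit.form.DateTextBox");
  dojo.addOnLoad(function () {
    // Assumes an <input id="startDate"> exists elsewhere on the page
    new dijit.form.DateTextBox({}, "startDate");
  });
</script>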

Power Up!

Every time you find a bit of this information, no matter how small, it can make all the difference to how productive you can be with the code; it’s like you have gained a power-up. In fact often, maybe only in my head, I can hear one of those tinkling game sounds notifying me of some additional ability.

If we compare Dojo to the other libraries that can be used for working with ArcGIS, we can see how having a massive set of technical authors and high production values can really streamline the help for a product. Whilst both Silverlight and Flex have had a number of versions, there have been no massive breaking changes like there have been in Dojo. The relevant documentation is maintained on a single site, backed by a large company that provides an integrated experience including videos and professional tutorials, and that encourages staff to blog about best practices for using the software.

Dude, where’s my manual?

Dojo, on the other hand, with its open source nature, is always going to struggle to compete, unlike a JavaScript library such as the Yahoo-supported YUI, which has a series of patterns to be used, a comprehensive help system and someone (maybe more!), I assume, who is paid to maintain and promote this information.

Whilst products like Aptana can really help developer productivity, the lack of a tightly integrated IDE like Flex Builder or Visual Studio means that working out how something is used by looking at the objects and methods becomes a more complicated process. Couple that with a number of versions released in a relatively short time, including some major breaking changes between versions 0.4 and 0.9, and the search for help can be fraught with danger.

Just because a site says something might work in Dojo, always be careful to read the label and check the version it says it is based upon!

Doctor, heal thyself!

I know that the answer to this is to contribute to the documentation myself, to make it better for other people. Maybe I will if I ever get time; in the meantime I will resume my search for the mysterious documentation. If you don’t hear from me again, I hope these notes will serve to remind others of the perils that attend the search for the Dojo docs. I will also leave clues about how you can find your way to the promised land of Dojo goodness.

Why are you still here? Go and read the doc and good luck with the power-ups!