A rule of thumb.

There has long been a rule of thumb for deciding how many instances to give a map service for optimal performance. Finding this information has sometimes been hard; when asked for it the other day, and failing to find it, I decided to see whether it was on the new resource centre. Fortunately there is a page on services performance:

http://resources.esri.com/enterprisegis/index.cfm?fa=performance.app.services

Here it not only gives the ‘rule of thumb’ for the number of instances for a map service (2.5 * #CPUs) but also a whole series of information about the relative performance of each service type and the factors that specifically affect the performance of any map service.
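As a quick illustration of that rule of thumb, here is a minimal Python sketch that turns a CPU count into a suggested starting value for the maximum instances setting. The 2.5 multiplier comes from the page above; the function name and the rounding choice are just assumptions I have made for the example, not anything ESRI prescribes.

```python
import math

# Rule of thumb from the services performance page: roughly 2.5 service
# instances per CPU core. The rounding and function name below are
# illustrative assumptions only.
INSTANCES_PER_CPU = 2.5

def suggested_max_instances(cpu_count):
    """Return a starting point for a map service's maximum instances setting."""
    if cpu_count < 1:
        raise ValueError("cpu_count must be at least 1")
    return int(math.floor(cpu_count * INSTANCES_PER_CPU))

if __name__ == "__main__":
    # e.g. a 4-core server gives 10 instances as a starting point for tuning
    print(suggested_max_instances(4))
```

Of course this only gives a first guess; the right number for any particular service still needs to be confirmed by testing, as discussed below.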

With 9.3.1 it becomes a bit easier to determine automatically why a service might be slow, either by using the new .MSD service type and the Map Service Publishing toolbar, or by using the old-school mxdperfstat script.

The Perils of Synthetic Tools

Of course, any synthetic tool will only give you a level of guidance; real proof has to come from actually performance testing the solution during development, preferably as early as possible. Such tests and examples are given in two recent ESRI whitepapers: High-Capacity Map Services: A Use Case with CORINE Land-Cover Data and Best Practices for Creating an ArcGIS Server Web Mapping Application for Municipal/Local Government.

Both documents cover the optimal use of data and its effect on how an application performs. The former does so in the context of a high-scalability site, but with information that can be applied to all sites, especially the recommendations about using file geodatabases for large performance gains (a rough sketch of that idea follows below). The latter is important because it shows how a workflow can be mapped to implementation choices for an ArcGIS Server architecture, map services and geoprocessing services for a medium-sized authority.
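As a rough illustration of the file geodatabase recommendation, the sketch below copies feature classes from a source workspace into a file geodatabase. Note the assumptions: it uses arcpy (available in later releases; the same idea applies with the older geoprocessor scripting interface), and the paths and workspace names are hypothetical, not taken from the whitepapers.

```python
import os
import arcpy

# Illustrative only: copy feature classes from a source workspace (for
# example a personal geodatabase) into a file geodatabase, the format
# recommended for large map service performance gains.
# All paths below are hypothetical.
source_workspace = r"C:\data\source.mdb"
target_folder = r"C:\data"
target_name = "published_data.gdb"
target_gdb = os.path.join(target_folder, target_name)

# Create the file geodatabase if it does not already exist.
if not arcpy.Exists(target_gdb):
    arcpy.management.CreateFileGDB(target_folder, target_name)

# Copy each feature class from the source workspace into the file geodatabase.
arcpy.env.workspace = source_workspace
for fc in arcpy.ListFeatureClasses():
    arcpy.management.CopyFeatures(fc, os.path.join(target_gdb, fc))
```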

A Good Guide

Guidance like that available in these two documents, and on the Enterprise Resource Centre in general, whilst not indicative of how every site will perform, gives a good grounding in the pitfalls to avoid when translating user requirements into a specific solution architecture. With any performance and architecture work, though, it’s important to think not only about performance now but also about the performance implications of the site growing over time. Without any analysis of the capacity requirements of your site, you really don’t know how long your current performance will hold. It should be remembered, though, as is said so eloquently on Ted Dziuba’s site, that ‘unless you know what you need to scale to, you can’t even begin to talk about scalability.’

Understanding your current performance requirements, your short- to medium-term load requirements, and your potential spike points means you can concentrate on the right parts of your application in terms of performance and stop worrying about the areas that might never become a problem. The book ‘The Art of Capacity Planning’ gives a good overview of how to tackle monitoring your site’s performance over time, what to worry about, and when.
