Recently we’ve been running some software pilots in our unit. Although I am sure that some folk in our Uni think that we are unnecessarily risk averse with regard to core systems such as Moodle (“it’s just a line of code” is a running in-joke we have about folk who do not understand the complexity of the systems we run, and the possibility of them breaking in unpredictable ways if we try to hack them), we are actually keen to run pilots of potential software as and when the opportunity arises. These are usually bits of third-party software, so we are seeing whether they are suitable for use within our Uni – we do not have the capacity to rewrite other folks’ products.
But we’ve noticed a funny thing. When we run a pilot we assume that our aim is to evaluate a bit of software and produce a report weighing up the pros and cons prior to making a recommendation about its suitability, sustainability, robustness, etc. – so it is by no means a done deal. Here’s a quote from Prof Wiki summarising what we think we are doing:
Often in engineering applications, pilot experiments are used to sell a product and provide quantitative proof that the system has potential to succeed on a full-scale basis. Pilot experiments are also used to reduce cost, as they are less expensive than full experiments. If there is not enough reason to provide full scale applications, pilot studies can generally provide this proof.
A consequence of the above, then, is that a pilot might fail, or might have to be shelved for the time being because the software is not quite up to scratch yet – we do try not to say no, but often we have to say “not yet”. So we think that what we are doing is working with a group of folk who are willing to take a risk and help us decide whether a system is viable. However, we’ve recently come across some people who seem to believe something different. When these folk are faced with any problems with the software they are helping us test, they do not see this as a barrier to rolling out the software at an institutional level – instead they assume that it is we who are at fault and not the software, and that if they shout loud enough magic will happen. Like the proverbial ostrich they stick their heads in the sand and refuse to believe that some bits of software are just not suitable (because they are too broken, too primitive, whatever the reason) and that consequently not every pilot will result in a full-scale roll-out of the software.
I am not sure how we can educate such people.
Student pilot Jean McRae of Homosassa: Tallahassee, Florida. flickr photo by State Library and Archives of Florida http://flickr.com/photos/floridamemory/4111953123 shared with no copyright restriction (Flickr Commons)
Ostriches by Fwaaldijk [CC0], via Wikimedia Commons