When is a pilot not a pilot?

Recently we’ve been running some software pilots in our unit. Although I am sure that some folk in our Uni think that we are unnecessarily risk averse with regard to core systems such as Moodle (“it’s just a line of code” is a running joke we have about folk who do not understand the complexity of the systems we run, and the possibility of them breaking in unpredictable ways if we try to hack them), we are actually keen to run pilots of potential software as and when the opportunity arises. These are usually bits of third-party software, so we are seeing if they are suitable for use within our Uni – we do not have the capacity to rewrite other folks’ products.

But we’ve noticed a funny thing. When we run a pilot we assume that our aim is to evaluate a bit of software and produce a report weighing up the pros and cons prior to making a recommendation about its suitability, sustainability, robustness, etc. – so it is by no means a done deal. Here’s a quote from Prof Wiki to summarise what we think that we are doing:

Often in engineering applications, pilot experiments are used to sell a product and provide quantitative proof that the system has potential to succeed on a full-scale basis. Pilot experiments are also used to reduce cost, as they are less expensive than full experiments. If there is not enough reason to provide full scale applications, pilot studies can generally provide this proof.

A consequence of the above, then, is that a pilot might fail, or might have to be shelved for the time being because the software is not quite up to scratch yet – we do try not to say no, but often we have to say “not yet”. So we think that what we are doing is working with a group of folk who are willing to take a risk and help us decide whether a system is viable. However, we’ve recently come across some people who seem to believe something different. When these folk are faced with any problems with the software they are helping us test, they do not see this as a barrier to rolling out the software at an institutional level – instead they assume that it is we who are at fault and not the software, and that if they shout loud enough magic will happen. Like the proverbial ostrich they stick their heads in the sand and refuse to believe that some bits of software are just not suitable (because they are too broken, too primitive, whatever the reason), and that consequently not every pilot will result in a full-scale roll out of software.

I am not sure how we can educate such people.

Image credits:

Student pilot Jean McRae of Homosassa: Tallahassee, Florida. flickr photo by State Library and Archives of Florida http://flickr.com/photos/floridamemory/4111953123 shared with no copyright restriction (Flickr Commons)

Ostriches by Fwaaldijk [CC0], via Wikimedia Commons

This entry was posted in Online learning, Technology, University.

4 Responses to When is a pilot not a pilot?

  1. fmindlin says:

I have a different interpretation of your wiki quote. The words “sell” and “proof” both point to the self-contradiction in trying to persuade people of your “objective” stance – we just want to see if this works, and find out how it doesn’t, or if it doesn’t – and the obvious stance of advocacy for a sale in the fact that you’ve put it out in the first place. If no one on the creative team is actually interested in getting it used, why was it put out? If it doesn’t work, I the tester have just wasted my time for no apparent good, except perhaps yours, but you’ve not committed to making a real fix, if it’s too hard to fix, so we’re all just wasting our time….

    • NomadWarMachine says:

      Ah, but these are third party systems, Fred. We don’t write them, but we do keep an eye on what is around and see if it is fit for what our folk are looking for.

  2. scottx5 says:

People could try to see it as an opportunity to try something and have a chance for input. But of course if their time is too precious to be wasted on critiquing and intelligent speculation, then perhaps they could put their head back up their arse and wait for someone else to TELL them to shut up and just do it. Among the unhelpful, “selective autonomy” I think it’s called.

Have you read “Dreaming in Code” by Scott Rosenberg? It’s about building software and it never being bug free. If a person has never attempted creating something from scratch, or even accepting something as not ready but worth looking at, they will never know how many gaps there are in their own seamless logic. How many untried things that may undo our best ideas are left incomplete by building little connection conveniences across them? What do they look like when written down? Obvious, careless faults that prove the futility of perfect design. Oh golly!

My story on beta testing: a few years ago I spent almost a year in a group piloting a series of online classes leading to a certificate in “Hope Studies.” After the year’s work we were offered the certificate if we traveled to the city and paid for a week-long completion course. I couldn’t afford the trip, accommodation or the tuition (and was too sick anyway). By odd chance, the person who created the Hope course became my new boss and it was great to work with her. That, I believe, was why they chose her to fire me a few weeks after she arrived. If you calculate it all out, being foolish really does pay less but it’s a lot more fun.

    When my wife, Leslie, started work at The College of Abysmal Outcomes they were just introducing online courses. No one was happy about having to change and they took it out not on the Magnificent Inflatable Toad who directed them to change or be fired, but on Leslie who did all she could to ease the transition. So after years of refusing to change the staff at the college have driven away almost everyone who registers a viable pulse and who cares about the rest?
