As I wait for news on when flights will resume, I contemplate the ocean between here and home, and how it is the perfect metaphor for the concept of "boiling the ocean" for data-driven decision making in online learning.
Though I have enjoyed being in Bucharest and attending the eLSE conference, I know it's not where I want to be at this time. Across an ocean lies home: a place where things may not be perfect, but where I can function efficiently (though I have friends who may argue about the degree of my functionality). However, between here and home lies a vast expanse of water that, from 38,000 feet, takes on the appearance of a never-ending opaque gulf. Fortunately, the giant aluminum tube that transports me across this chasm assures passage from my current state to my desired state with a very high degree of confidence.
For eLearning there are no giant aluminum tubes that will rapidly transport us to a highly desirable state. Though eTime can take on the feeling of dog years, we are still involved in an enterprise that is in its infancy. For institutions and practitioners there are some rudimentary maps, but they are much like those produced by 16th-century cartographers. There is a fairly well defined outline of Europe and a very hazy New World coastline, with a vast expanse between that is overlaid with the words, "Here there be monsters." For those in eLearning the maps are much the same; our knowledge of the current landscape is fairly well defined and we have some vague idea of where we want to go, but in between lies a very dark and mysterious expanse.
Don't get me wrong, there are some great explorers out there who have started to hoist sails and explore hostile waters. In fact, I have had the pleasure of being a passenger on some of these ships, most notably Sloan-C and WCET. While these vessels have highly skilled pilots and navigators, it is important to note that they are limited by the crude instruments and incomplete maps they have at their disposal. For eLearning the invention of an airplane is somewhere in the impenetrable mists of time. For the foreseeable future we are destined to forge ahead aboard tiny ships. However, the journey need not be perilous or fraught with unknowns. We have the ability to boil the ocean and expose the chasms and monsters that lie below if we have the will.
In 1993, Peter Drucker issued the following challenge:
“Indeed, no other institution faces challenges as radical as those that will transform the school. The greatest change, and the one we are least prepared for, is that the school will have to commit itself to results. The school will finally become accountable.”
As we survey the eLearning landscape, the prophetic nature of Drucker's words becomes apparent.
Though the recent DOE meta-study decisively concludes that online learning can be more effective than face-to-face learning, there remains a multitude of issues plaguing the industry, ranging from fraud in the financial aid process to low completion rates, as well as the ubiquitous criticisms related to low quality. Proponents of eLearning counter that there is a wealth of research supporting the efficacy of the medium. Granted, a great deal of very fine work has been produced over the years; however, now is the time to ask whether the existing data are truly adequate.
Many critical assumptions about effectiveness in eLearning are based on studies with an n of less than 1,000, and on a multitude of qualitative studies with n's that are a fraction of that. A few years ago, at an online learning conference, I saw the audience collectively gasp when a study with an n of 22,000 was presented. Since that time I personally have worked with data sets in the 100,000 range. Similar data initiatives are being undertaken by Rio Salado, Capella, and Kaplan, to name a few. In the greater scheme of things, however, these are still relatively small studies that focus on very well defined populations. If these same approaches were applied to pharmaceutical testing, mortality rates would skyrocket. In the financial sector, reliance on similar sampling techniques would all but guarantee that our credit card information would regularly fall into the hands of nefarious individuals in the third world. And, last but not least, if Amazon used comparable methods, its "books you might like" recommendations would have about a 10% rate of accuracy. To return to the seafaring metaphor, we are still only mapping the continental shelf of our homeland, not venturing out into the "deep blue."
Despite the obvious need to use empirically driven measures to explore the eLearning landscape and create informed solutions, there is still a strong reluctance to do so in the academic community. Sheltered by the traditions of Bologna and Oxford, academia refuses to view itself as an enterprise, clinging instead to the idea of the Professoriate as the center of wisdom. Thus, the pathway to change is guided not by fact, but by the collective wisdom of those of us with letters denoting some form of doctorate following our names. The by-product of this process is that we often create new monsters instead of exposing the real ones that lie beneath the waves. As the old adage goes, "A camel is a horse designed by a faculty committee." However, there are signs of a seismic shift starting to occur in eLearning.
Earlier this month I was invited to a Quality in Online Learning Summit at the Gates Foundation, where many potential avenues for improving the quality of online education for underserved populations were discussed. The focus group I was part of explored the concept of analyzing extremely large, multi-institutional data sets to inform our current state and drive program and course development. The recent RFP from the Lumina Foundation highlights the need for quantitative measures to inform eLearning practices. In a recent NYT article, Victor Vuchic, the program officer responsible for open education at the Hewlett Foundation, said, “We’d like to see data being gathered, and see these materials being improved, and we’d like to see new models of learning.” At the conference I just attended in Bucharest, there was a great deal of discussion about how basic and advanced eLearning courses can be improved to ensure quality educational experiences for those entering the work force. From Washington, there is a persistent drumbeat calling for accountability and proof of learning effectiveness in online programs. Finally, there is the shocking dismantling of the UT TeleCampus, based on assumptions that could be neither empirically proven nor disproven.
From all directions there is a sense of urgency revolving around the realization that we need to know more, a lot more, if we want to accelerate the pace and improve the accuracy of the eLearning enterprise. We have at our disposal the ability to adapt well-forged tools to produce accurate compasses and maps for practitioners. With respect to our vendor partners, the sharing of data would help build ever sturdier ships from which our voyages could be launched. Below the eLearning ocean dwell some ugly creatures and perilous landscapes; however, understanding their nature is the way to create a clear path to our goals. The question that remains is: "Do we have the will to boil the data ocean, or are we content sailing about the safe shoals of our current coastline?"