J. C. R. Licklider – “Man-Computer Symbiosis”

One of the most difficult things about studying history is recreating the social, intellectual, technological, and physical environment of your subject. Mostly this entails putting your own assumptions aside, which first means recognizing the assumptions that you make about life. This is not always easy. Some assumptions are so ingrained that we cease to notice them at all, let alone recognize their absence in the past. Two of my favorite examples of this are access to medical care and access to communications infrastructure. People in 1800, even if they lived in a relatively affluent area, had atrocious medical care by our standards, which translated into much lower life expectancies. A letter – the dominant mode of long-distance communication – took anywhere from two weeks to several months to cross any significant distance (like the Atlantic Ocean). We take access to decent medical care for granted, as we do communications infrastructures like telephones (and, of course, the internet).

The reason I bring all of this up in a post about an article from the mid-20th-century U.S. is relatively simple: I found it hard to understand exactly what computers of Licklider’s time actually looked like and how they functioned. One has the “room-size” computer in mind, the type that was programmed by punch card and was actually susceptible to physical insects (bugs) getting caught in its relays. But beyond this stereotyped understanding of computing, it is very, very difficult to break out of our current understanding of what computers are and how we use them. I’m typing this on a machine that fits in my lap, yet has many, many times the processing power of the room-size computers of Licklider’s day. I have instant access to a large portion of the world’s information and can use this machine in an almost infinite variety of ways, all without needing to know a single line of computer code. With some coding skills, the possibilities expand even further: I can create striking representations of complex data through free programs like R and Gapminder, using data sets freely available from governmental and non-governmental agencies across the globe. Even so, my assumptions about what a computer can do don’t simply recede when I try to put myself in Licklider’s mind-space – a world in which I would need to articulate exactly what I wanted to compute before I did it, sit down with a programmer to figure out how to compute it, and then wait for processing time on the machine itself to become available. When we consider how frankly archaic this sounds to us in 2014, I think we start to get a better sense of how far we’ve come. Consider this quote from him about computer storage:

The first thing to face is that we shall not store all the technical and scientific papers in computer memory. We may store the parts that can be summarized most succinctly – the quantitative parts and the reference citations – but not the whole. Books are among the most beautifully engineered, and human-engineered, components in existence, and they will continue to be functionally important within the context of man-computer symbiosis.

We now have the capacity to store essentially all of the world’s recorded information in computer memory. What I’m trying to say is that, at least in some senses, I think we have achieved Licklider’s vision, or are approaching it in meaningful ways. Consider his statement about the work that he was actually doing:

The main suggestion conveyed by the findings just described is that the operations that fill most of the time allegedly devoted to technical thinking are operations that can be performed more effectively by machines than by men. Severe problems are posed by the fact that these operations have to be performed upon diverse variables and in unforeseen and continually changing sequences. If those problems can be solved in such a way as to create a symbiotic relation between a man and a fast information-retrieval and data-processing machine, however, it seems evident that the cooperative interaction would greatly improve the thinking process.

Today we run weather models, simulations of space shuttle reentry, social network analyses, and any number of other computations on rapidly changing variables. We use our computers to help us think through problems – to rapid-prototype, if you will – and reach solutions far faster and with much less effort than in Licklider’s day. Perhaps even more to his point, advances in natural language processing, semantic linking, and machine learning are beginning to let computers “think” more like human beings and reckon with new information on their own, rather than through the mediation of a human being. Considered from this angle, we live in exciting times indeed.

NMC Keynote: Jason Ohler on “Trends that Bend”

Dr. Jason Ohler gave the Wednesday morning keynote, in which he identified five technology trends of the near future. He believes that these technologies will help all of us cope with the information flood that only grows as our lives increasingly blend the virtual and the real. To give a sense of scale, Dr. Ohler estimated that it would take him between thirty and forty days to process all of the information he receives in a single twenty-four-hour span. He went through each of the five trends and discussed how each might apply to education. I’ll summarize the trends first and then turn to the education-related impacts at the end. Here are the trends, in order:

Trend 1: Big Data

Google collects 24 petabytes of data every day. There is no real way to keep ourselves out of the path of the big-data juggernaut, and that has both good and bad elements. Text analyzers and predictive data analytics are rapidly improving and can help us make sense of this data. But we need to think about the kinds of data that we’re collecting, especially in educational contexts, and make sure that we are clearly articulating the goals of big data and shaping its future course.

Trend 2: Immersion

Augmented reality is becoming mainstream, and the line between virtual worlds and the real world is becoming increasingly blurry and interconnected. Immersion is the antidote to spam: when two pieces of data are meaningfully connected – location and reviews, say – relevance is far easier to assure. This contextualization has demonstrable impacts on our ability to sort through information.
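
To make that location-plus-reviews point a bit more concrete, here is a minimal sketch in Python; the review data, coordinates, and the `nearby_reviews` helper are all invented for illustration, not anything Dr. Ohler presented:

```python
from math import hypot

# Toy data: each review carries the coordinates of the place it describes.
# Values are made up purely for illustration.
reviews = [
    {"place": "Coffee Shop A", "rating": 5, "x": 0.2, "y": 0.1},
    {"place": "Diner B",       "rating": 3, "x": 5.0, "y": 4.0},
    {"place": "Food Cart C",   "rating": 4, "x": 0.5, "y": 0.3},
]

def nearby_reviews(reviews, x, y, radius=1.0):
    """Return only the reviews attached to places within `radius` of (x, y)."""
    return [r for r in reviews if hypot(r["x"] - x, r["y"] - y) <= radius]

# Standing at (0, 0), only the two nearby places are surfaced as relevant.
for r in nearby_reviews(reviews, 0.0, 0.0):
    print(r["place"], r["rating"])
```

The filtering itself is trivial; the point is that once location and reviews are connected, relevance stops being a guessing game.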

Trend 3: The Semantic Web

It used to be the case that links went page by page: they connected one large container or chunk of data to another. This is changing, such that very specific pieces of data are being linked to one another to create relationships that augment intelligence. It is growing alongside another web technology, the internet of things, which will bring an even further leap in the connections between machines, data, and people. It is important to realize that the camera keeping an eye on the subway station is no longer just a camera – it is also a platform that can run applications. These innovations will continue to make the over-abundance of information connected and intelligible, but they obviously come with other risks as well.
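
As a rough sketch of what data-level linking looks like in practice, here is a short Python example using the rdflib library; the namespace, identifiers, and properties are invented purely for illustration, but the pattern – specific resources connected by explicit, machine-readable statements – is the one the semantic web builds on:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical vocabulary and identifiers, just to show the shape of the data.
EX = Namespace("http://example.org/")
camera = URIRef("http://example.org/devices/camera-42")
station = URIRef("http://example.org/places/subway-station-7")

g = Graph()
# Each statement links one specific piece of data to another,
# rather than one whole page to another whole page.
g.add((camera, RDF.type, EX.Camera))
g.add((camera, EX.monitors, station))
g.add((station, EX.label, Literal("Subway Station 7")))

print(g.serialize(format="turtle"))
```

Because each statement is addressable on its own, another program (or another device on the internet of things) can follow those links directly, without a human reading a page to interpret them.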

Trend 4: Extreme BYOD

Bring-Your-Own-Device has been around for a while, but in the new version Dr. Ohler described, the customization and personalization of these technologies will continue to test the flexibility of our IT infrastructures. That personalization will also be a boon to workers, who will increasingly be able to tailor their devices to exactly how they want to work, giving them some additional filtering of the information flood.

Trend 5: Transmedia

Transmedia storytelling is huge everywhere except education. Rather than telling a linear story through text or bullet points, transmedia enables multiple media types and different transmission methods to coexist to tell a single story. We need to be able to communicate in new ways, especially with visual media, and bridge the gap between creative thinking and critical thinking – leading to Dr. Ohler’s neologism “creatical thinking.”

Educational Impacts

Taking all of this together, it seems clear that these trends are helping to make the amazing amount of information we encounter more manageable. But what do these technologies mean for education? The big takeaway is that we need to be teaching students how to become active and responsible digital citizens. This can’t be confined to the closed environments of the “school web,” either: students need robust experiences with open technologies that actually model the kinds of meaning-making they will continue to engage in throughout their lives. Creatical thinking, transmedia, augmented reality, customized devices, and the semantic web all point to the types of skills that students should be building. Moreover, students should be brought into the discussion about the responsible use of these technologies and the directions each technology should take.

There are also very good opportunities for customized learning – digital tutors, in effect – that work with students on a more individual level. As algorithms for speech processing and text analysis continue to improve, the Clayton Christensen-style disruption that many in the higher-ed and K-12 space have talked about for so long may finally be here.


cross-posted at http://nmc14portland.tumblr.com/