I’ve come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things.
– Douglas Adams, The Salmon of Doubt
I recently had to attend a two-day training away from home and found myself with some free time in the evening. A couple of friends decided that playing basketball would be fun. We breathlessly called it quits after ten minutes. Luckily for me I wasn’t the youngest one there, so humiliation avoided – barely.
After that, we spent the rest of the evening sitting and talking. One of my friends works as a high-school teacher and he talked a bit about a project he was planning to do with his pupils. It sounded like a very fun project, but the thing that stuck with me from the entire conversation is not the project itself, but that he said that teenagers today don’t really know how to use computers.
The misconception that I – and many other people – share is that the millennials were born into technology and don’t need to be taught it. At first sight it surely looks that way – they can text like crazy, are familiar with the latest apps, and can’t imagine life without at least one computing device on their person. Once you drill down, however, things look very different.
The iPhone revolution made technology not only accessible to my mother – it also made it usable for my toddler. There is no longer a reason to memorise arcane command-line commands or which clicks are required before you can change the environment settings in the non-Unix operating system. You can actually use your computer to get things done. Like playing Candy Crush. There is no incentive for understanding how it all works.
This is great.
And yet I’m left wondering how my children will learn to program and if they even need to.
When I was a teenager, I used to play a few games on my brother’s PC. I eventually started “programming” by writing a bunch of MS-DOS batch scripts, augmented by some Norton utility programs, to make rudimentary quest games. After a quick stint with QBasic, I moved on to C and “real” programming.
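Those batch-script quest games were, at heart, menu-driven state machines: each “room” printed some text and a numbered menu, and the choice jumped you to another room. A minimal sketch of the idea (in Python rather than MS-DOS batch; the rooms and names here are invented for illustration, not from the original scripts):

```python
# Each room has a description and numbered choices leading to other rooms,
# much like the GOTO labels in an old batch-file quest game.
ROOMS = {
    "cave": {"text": "You stand in a dark cave.",
             "choices": {"1": ("Go deeper", "tunnel"), "2": ("Leave", "exit")}},
    "tunnel": {"text": "The tunnel dead-ends at a treasure chest.",
               "choices": {"1": ("Open it", "exit")}},
    "exit": {"text": "You step into daylight. The end.", "choices": {}},
}

def next_room(room, choice):
    """Return the room a given choice leads to; stay put on invalid input."""
    choices = ROOMS[room]["choices"]
    return choices[choice][1] if choice in choices else room

def play(inputs):
    """Run the game non-interactively against a list of choices,
    returning the path of rooms visited."""
    room, path = "cave", ["cave"]
    for choice in inputs:
        if not ROOMS[room]["choices"]:  # terminal room: game over
            break
        room = next_room(room, choice)
        path.append(room)
    return path
```

The whole “engine” is a dictionary lookup and a loop – which is roughly why a teenager with batch files and a text editor could build one.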
I could start my journey because the machine that I was already using allowed me to tinker with it.
I wonder if tablets will provide a similar path. One of my biggest disappointments in the iPad is the part where it deviates from Alan Kay’s original vision of the DynaBook. If you go back to the original 1972 paper, you’ll see that the tablet was envisioned as a tool for teaching children how to program a computer. While there are some educational apps on the iPad, you can only write iPad apps on a Mac. You can’t write iPad apps on the iPad.
My four-year-old son can use a mouse. He uses it to watch YouTube videos, but on a machine that, should he ever want to, will let him do whatever he wants with it. Tom Preston-Werner is promoting the wonderful Codestarter project – because the answer is in the personal computer.
My generation grew up taking anything non-digital for granted. All the complexities of physics and mechanical engineering were abstracted away by the computer. I believe programming will come to be seen the same way.