I recently googled “teaching kids about Artificial Intelligence” (in quotation marks, for an exact match). Back came a search page with just three results. When a Google search yields so few entries, you know you have either typed gibberish, an irrelevant topic, or a topic that is not yet well researched. After checking my spelling, I decided to look a bit further into the matter.
As a father of 8-year-old twins in an era of transition, I grapple with this subject. Some young parents will disagree: ‘another crazy addition to an increasingly demanding educational agenda; let kids be kids.’ Yet it is hard to ignore. Over the past few years we have all witnessed the breakthrough application of AI in our everyday lives, and have been bombarded with publications and articles about the magnitude of its impact on the future of jobs.
This raises the question: how early is too early to introduce our kids to the single most important technological development of our time? A development that will not only create social realities our kids do not observe in the world today (e.g., my kids will likely never need to obtain a driver’s license), but will also, more importantly, seismically shift their livelihood opportunities?
To start the dialogue, we should look into three derivative questions:
1. Can young children understand the concept of AI?
2. If so – how do we make it palatable to them (to use, to interact with)?
3. Even if we can make it accessible – why is it important?
I do not have comprehensive answers to these questions, but here are a few thoughts on each.
Can young children understand the concept of AI?
As an alum, I was heartened to find the following video of a class about AI taught a few months ago by Professor Hod Lipson to a group of kids aged 6 and above at Columbia Business School:
If you have 40 minutes to spare, I recommend you watch it. Before commenting on it further, consider the following. When we Gen-X parents of today were teenagers and received our first IBM personal computers, we could open, with a press of two side buttons, the big hardware box and peer in awe at its marvelous integration of transistors and printed circuit boards. We also experienced daily the hardware’s limitations of storage and processing power. In today’s world, capacity and computing power have become ubiquitous and increasingly cloud-based. Few of the millions of users worldwide dwell on them.
AI is new to our lives. While its science began in earnest in the late 1950s, modern everyday applications are only a few years in the making. As with anything new and exciting, many of us are reading up on computational linguistics, machine learning algorithms, and the other new disciplines that let software process information and make decisions like a human. We want to understand the actual mechanics of how a computer program can distinguish between pictures of cats and dogs, or between green and yellow. It is likely that our kids will not dwell much on any of this. AI will be ubiquitous in their lives. It will matter less how it works, and more how to wield it to approach new frontiers (already today many leading AI platforms are open source and available to all).
If we accept this notion, the challenge of explaining AI to young children becomes more approachable. We do not really need to teach them what cognitive computing is or how an algorithm works. Instead we can focus on simpler concepts: machines, like babies, can learn from examples; they get better very quickly and can do some things much faster than a human; humans have a role in designing their use to make our lives better; and, like any tool, they can be used for good or for bad. Arguably, understanding these building blocks will be more beneficial to their future than understanding what a neural network is, and likely sooner than we think.
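The first of those building blocks, machines learning from examples rather than from coded rules, fits in a few lines of code. Here is a minimal sketch (all colors and labels are made up for the illustration, and this toy nearest-neighbor approach is just one simple way to learn from examples):

```python
# A toy illustration of "machines learn from examples": a nearest-neighbor
# classifier that labels colors it has never seen, based only on a handful
# of labeled samples. No color rules are coded anywhere.

def distance(a, b):
    # Squared distance between two RGB triples.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(examples, rgb):
    # Pick the label of the closest known example.
    nearest = min(examples, key=lambda ex: distance(ex[0], rgb))
    return nearest[1]

# "Teaching" phase: show the machine a few labeled colors.
examples = [
    ((255, 0, 0), "red"),
    ((200, 30, 30), "red"),
    ((0, 200, 0), "green"),
    ((30, 255, 60), "green"),
    ((255, 230, 0), "yellow"),
    ((240, 220, 40), "yellow"),
]

# The machine now labels colors it was never shown.
print(predict(examples, (220, 10, 10)))   # prints "red"
print(predict(examples, (250, 240, 20)))  # prints "yellow"
```

Show a child more examples and the machine gets better; show it mislabeled ones and it gets worse, which is itself a lesson about how these systems depend on their teachers.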
Back to that class at Columbia. Watching very young kids (some as young as 6) grapple with these concepts is fascinating, as is evident from the profound, and sometimes humorous, questions they ask, questions that go to the heart of AI:
– “When is technology considered Artificial Intelligence?”
– “How do we deal with people losing their jobs?” “Will policemen lose their jobs?”
– “How will AI affect weapons?”
– “Can AI combine both rules and machine learning?”
– “Can AI swim in water?”
Our kids interact with Alexa daily. They have toys that use visual and voice recognition. They hear about self-driving cars. We should assume they have the capacity to understand the broader concepts of AI and the questions it poses, fueled by the endless curiosity of a child.
How do we make AI accessible to kids?
This is virgin territory, but there are some signs of the technology world picking up the challenge.
Consider start-up applications like Pika: https://www.digitaltrends.com/mobile/pika-kids-teach-ai-colors/
Pika is still in the fundraising stage, but its approach is well thought out. The mobile application does not try to teach kids what AI is or how it works. Instead, it lets them use it. The app combines a camera, augmented reality, and computer vision algorithms, all wrapped in a friendly user interface that allows a 5-year-old to teach a software robot new skills, such as identifying colors. The novelty is likely not in the engine or its skills, but in the immersion of kids in the world of AI and the seamless way it introduces them to the concept of helping a machine learn.
It is likely only a matter of time until we see one of the AI giants take such ideas to a younger audience.
Why is it important?
The more obvious reason is where we started. Our kids are growing up in a world that is changing exponentially. Today’s parents, with all the means at their disposal, may be the most ill-equipped to date to describe to their children how the world will function when they grow up. An answer to the famous question, “what do you want to be when you grow up?”, now comes with a twist. A pilot or a surgeon may not exist in the future, certainly not in their current form or capacity; these professions will be morphed by the engineers of the future, who will wield AI to redefine them and the role humans play in them. Like parents before us, we can only give our kids the best tools at our disposal today to cope with the problems of the future. Understanding the basic concepts of AI looks more and more like a useful tool.
An equally good reason to introduce AI to kids may lie with the future of innovation. A couple of weeks ago a compelling development in artificial intelligence was reported, involving the game of chess: https://www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match
Supercomputers beating the best human chess players in the world is old news. The new chess engine, AlphaZero from Google’s DeepMind lab, is big news, however. The machine learning engine was taught only the rules of the game: no chess strategy books, no opening or endgame frameworks, and no massive records of historical matches to learn from. It was not taught how to play the game. Instead, the program was given an innovative configuration for applying its general machine learning algorithms to develop its own chess expertise. After only 4 hours of self-play, AlphaZero was able to overwhelm the reigning computer chess champion, Stockfish, which was developed using traditional methods and refined by engineers over many years. ‘Overwhelm’ is a big understatement in this case: in a hundred games AlphaZero only won or drew and never lost, the equivalent of a knockout. During its dominant performance, AlphaZero at times made plays that struck expert chess observers as ‘alien’ moves, completely unexpected and irrational at first and second glance. It also proved far more efficient, evaluating roughly a thousandth as many positions per second as its defeated counterpart. The machine learning engine became the best chess player in the world by inventing its own strategies and frameworks from scratch, all in the time it takes to watch two movies. Hundreds of years of human chess experience and frameworks were irrelevant.
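AlphaZero itself is far beyond a blog sketch, but its core idea, a program given only the rules of a game that teaches itself by playing against itself, can be shown in miniature. Below is a toy self-play learner (a simple tabular, negamax-style Q-learning sketch, not DeepMind's method) for the game of Nim: players alternate removing 1 to 3 stones, and whoever takes the last stone wins. The pile size, learning rate, and episode count are arbitrary choices for the illustration.

```python
import random

def learn_nim(pile=21, episodes=20000, alpha=0.5, seed=0):
    # Self-play learning given only the rules of Nim: no strategy is coded in.
    # State = stones remaining; actions = take 1, 2, or 3 stones.
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(1, pile + 1) for a in (1, 2, 3) if a <= s}

    for _ in range(episodes):
        s = pile
        while s > 0:
            actions = [a for a in (1, 2, 3) if a <= s]
            a = rng.choice(actions)  # explore by playing random moves
            s2 = s - a
            if s2 == 0:
                target = 1.0         # taking the last stone wins
            else:
                # The opponent moves next, so our value is minus their best value.
                target = -max(Q[(s2, b)] for b in (1, 2, 3) if b <= s2)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

def best_move(Q, s):
    # Greedy policy from the learned values.
    return max((a for a in (1, 2, 3) if a <= s), key=lambda a: Q[(s, a)])

Q = learn_nim()
# With no strategy coded in, self-play rediscovers the classic Nim rule:
# always leave your opponent a multiple of 4 stones.
print(best_move(Q, 21))  # prints 1 (leaves 20)
print(best_move(Q, 7))   # prints 3 (leaves 4)
```

The point of the toy mirrors the point of the news story: nobody told the program about "multiples of 4"; that hundreds-of-years-old human insight simply re-emerges from the rules and self-play.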
The ramifications of this go far beyond chess, of course. The great innovations of the future may no longer rely on deep human expertise in the field where solutions are sought, but on deep human expertise in machine learning and its configuration. Previous teachings and human frameworks may even become a hindrance to building the best-learning, most efficient problem-solving machines. Future innovation in a given field may be just as likely to come from a teenager who is well versed in machine learning as from a 20-year expert in that field who is not. We can liken machine learning to a new form of communication, one that all engineers of the future will need to be versed in (even if not experts on) to solve problems with the aid of computing power. And as we know, a human masters any language far better when starting at an early age rather than in adolescence.
* * *
Before writing this commentary, I decided to show my 8-year-old twins the video from Columbia Business School. I was not expecting much, but was genuinely surprised by the level of interest they took in it, their understanding of basic concepts, and the profound questions they asked. After being quizzed by my daughter for a good 30 minutes with questions I did not always have answers to, she said she wanted to think about it a bit more while taking a shower. When she came back, she had two more:
“Will machines be able to turn themselves on?” was her first. “That is the million-dollar question no one is able to answer yet,” I said.
“One last question, Dad. You know how much I love writing… can I still be an author when I grow up?” “I think you’re safe there,” I answered.