We’re used to robots: we read about them, see them at the cinema, in art and graffiti, learn about them in science and technology class, we build them, and our lives are full of them.
Watching the video, I was struck by the very broad implications of “we” (and I couldn’t help being reminded of some school situations I know of while listening to the conversation between the Luddite horses).
I wondered, can the future be this bleak, problematic, cut-and-dried for humans? Will our lives become too full of robots? I searched for other opinions…
The video below made me feel a bit more hopeful (although it’s directed at “teachers”, I think the word “learners” would work, too):
Then, as if in answer to my searching, a handful of blog posts appeared in my RSS reader over the last few days:
MIT study says robot overlords could make for happier human workers – describes one way the “Humans need not apply” scenario will come to pass. (Note that this is a tiny, tiny study.)
“Automation in the manufacturing process has been around for decades, but the new study aimed to seek out the sweet spot where human workers were ‘both satisfied and productive.’
” ‘We discovered that the answer is to actually give machines more autonomy, if it helps people to work together more fluently with robot teammates,’ said project lead Matthew Gombolay…” (Engadget, 25 August 2014)
“New technologies are shaping the way we learn. The more we control them, the bigger impact we can have on where the education evolves. An interesting infographic created by Aleksandar Savic for Lenovo shows the opportunities advanced technologies give to educators and students.”
MakeUseOf offers an article titled How Computer Technology Will Transform Schools of the Future. Look at Point 2 “Robot Graders are the Future”:
“…Luckily, we have robots for that now. Khan Academy includes exercises after every few lessons to let you test your knowledge of the material covered, and users instantly know whether or not they got the exercise correct, and can access hints on solving the exercise…”
Engadget informs us about one of the ways robots are getting smarter, and how we can help them in Robo-Brain Teaches Robots How to Understand the World.
“…According to project lead Ashutosh Saxena from Cornell (the study’s a joint effort between Brown, Cornell and Stanford Universities as well as the University of California, Berkeley), his team’s goal is to “build a very good knowledge graph — or a knowledge base — for robots to use.” Think of Robo Brain as Wikipedia … that robots can tap into when they need to understand how we speak and how we see the world — both extremely important if they are to organically perform their tasks…”
More about robot learning can be found in this video, “Robot Learning: Perception, Planning and Language”:
I couldn’t help but begin to think that the IB Learner Profile should perhaps become the “Human Profile”…
“The IB Learner Profile is the IB mission statement translated into a set of learning outcomes for the 21st century. … It is a set of ideals that can inspire, motivate and focus the work of schools and teachers, uniting them in a common purpose.” (ibo.org)
Having seen a world “peopled” by robots in the posts and videos above, I thought about the Learner Profile (inquirers, knowledgeable, thinkers, communicators, principled, open-minded, caring, risk-takers, balanced, reflective). Which of the Learner Profile attributes could be used to describe robots? Which can describe only humans? On a well-known Theory of Knowledge web page I found a link to this post, under Ethics: Teach Robots Values So They Won’t Kill Us With Kindness
“I am deeply saddened by the inability of robots to do something as simple as telling apart an apple and a nectarine,” says engineer, futurist and CEO of Poikos, Nell Watson. Speaking at The Conference in Malmö, Watson makes the case that as robots get smarter and more capable, we are going to need to teach them human values in order that they don’t end up destroying us — either out of malice or kindness…
“The most important work of our lifetime is to ensure that machines are capable of understanding human value,” she (Nell Watson) says. “It is those values that will ensure machines don’t end up killing us out of kindness.” (link)
Ah yes – that tricky phrase “human values”! If you listed your own personal values, would you “look” like a robot, or a human? If you are attracted by the field of robotics, how do you imagine defining and then teaching “human values” to a robot? What will your ethics of artificial intelligence be? You might want to read The Challenge of Moral Machines, by Wendell Wallach, in the July/August 2014 issue of PhilosophyNow.
“…The building of moral machines provides a platform for the experimental investigation of decision-making and ethics. The similarities and differences between the way humans make decisions and what approaches work in (ro)bots will tell us much about how we humans do and do not function, and much about what, and who, we are.”
Image credit: creative commons licensed (BY-SA) flickr photo by jmorgan: http://flickr.com/photos/jmorgan/5164271