We’re into the beginning of our fourth week here at Singularity University.
I’d like to highlight a few of the advances we’ve learned about. Their outcomes and implications are often unknown, and where we see possibilities, we also see threats. That’s why we need to understand the bigger picture better – to find a new kind of mindset and to prepare, in terms of policy, law and ethics, or education and habits.
A case in point: when you hear “self-driving car”, do you fear for your safety and think machines can’t possibly operate seamlessly in surprising situations?
DARPA organized its first self-driving vehicle Grand Challenge in 2004. Most entrants crashed, and not one reached the finish line. In 2005, all but one of the 23 finalists surpassed the 11.78 km distance completed by the best vehicle in the 2004 race, and Stanford’s Stanley came in first. The 2007 challenge was a 96 km urban course, to be completed in less than 6 hours; the rules included obeying all traffic regulations while negotiating other traffic and obstacles and merging into traffic. Carnegie Mellon’s Chevrolet-based team won, the VW–Stanford team came in second, and Virginia Tech’s Ford-based team third.
On the same day these cars made it seamlessly to the finish – obeying traffic rules and sensing conditions – the worst highway pileup, involving 126 vehicles, happened just next door in California. Because people didn’t slow down in fog, they kept driving at full speed into the car ahead. I guess you get the picture, even without highlighting human error, distractions and so on.
Do you still think Google’s self-driving cars are a bad idea? I do not. In Finland alone, every winter we lose people to reckless driving in bad weather, every fall to collisions with large animals, and every summer to drunk drivers. Worldwide, 1.2 million people die yearly because humans are not good drivers, and 40% of the accidents involve drinking. Suddenly, a sensor-packed car with 360-degree radar monitoring its surroundings seems like the safe option.
IBM’s Watson supercomputer, with no internet access, outperformed two all-time Jeopardy! champions by far. This is an astonishing leap towards intelligent machines – Watson understood even trick questions.
Machine learning and natural language processing saw a breakthrough in a speech by Microsoft’s Rick Rashid, in which his spoken English was translated into Chinese in real time. (7’22)
A logic-gated nanorobot was built for the targeted transport of molecular payloads. Simplified, it carries a payload, opens only when it recognizes a target cell – such as a cancer cell – and delivers its cargo to destroy that cell.
I’ll speak more about robotics and artificial intelligence in a separate post, as besides the subject being huge, there are a lot of questions related to ethics, consciousness and law that easily arise.
Exponential technologies follow a Moore’s-law-like curve. I’ve now seen it in at least six presentations, most recently today in a great talk by Steve Jurvetson, a truly intelligent and pioneering investor.
To take off at mass scale, a technology needs to become cheap enough. Only the rich can afford a technology when it barely works; by the time it works well, it will be nearly free. Faster, smaller, cheaper, better.
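To make the scale of that curve concrete, here is a back-of-the-envelope sketch of my own (not from Jurvetson’s talk) of what exponential doubling implies, assuming an illustrative two-year doubling period:

```python
# Illustrative sketch of Moore's-law-style growth. The 2-year doubling
# period is an assumption for illustration; the real constant varies
# by technology and era.
def capability(years, doubling_period=2.0, start=1.0):
    """Capability after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Over 20 years at a 2-year doubling period, capability grows
# 2**10 = 1024-fold, so the cost per unit of capability falls
# by roughly the same factor.
print(capability(20))   # 1024.0
print(capability(40))   # about a million-fold
```

This is why a technology that “almost does not work” at launch can become effectively free within a couple of decades.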
In March 2013 American researchers at Stanford University announced they had built a transistor out of DNA and RNA molecules.
There’s a lot to gain in medicine, for example. Policy and legislation need to step up for the greater good of humans – FDA approval of a drug may take decades and leave thousands of people dying, out of fear that one person might die if the drug is approved. Was injecting the HIV virus into a child dying of leukemia legal? Was there a law? I don’t care – she lives.
I’d like you to think about the greater societal and intercultural impacts of these technologies. How will people react? How do we handle intelligent machines? How can additive manufacturing (3D printing) change our economies and healthcare, or lower the cost of space exploration through 3D printing in space (like these SU alumni)? How could intelligent sensor systems solve global hunger?