In my last post, I mentioned that I was interested in looking across the field of Computer Science to see if there may be other areas worth working in besides web development. Primarily, I'm interested in a more powerful run-time environment, so Python and Clojure seem very enticing. A safe first step would be to learn about the architecture of server-side code, both to deepen my understanding of web stacks and to satisfy my curiosity about what else is out there in terms of languages, VMs, and programming paradigms.
Another motivation to look outside of web-dev came more recently, in the form of some sage advice from an interview with Elon Musk. In it, he states that there is too much talent working in the internet space [55:00]. So if web-dev is overcrowded, what would be a worthy Comp-Sci field to pursue?
In the same interview, Musk also alludes to the seriousness of the approaching revolution taking place in the field of Artificial Intelligence. He describes how even in the most benign future involving a hypothetical super-intelligent AI, we would be regarded as superfluous curiosities, or pets. But one only needs to think of movies like The Terminator and The Matrix to imagine a more malignant future. When asked if any of the leading tech companies concerned him, Musk answered, "I won't name names, but there is only one," playfully paraphrasing Gandalf's ominous statement from J.R.R. Tolkien's The Fellowship of the Ring:
“There is only one Lord of the Ring, only one who can bend it to his will. And he does not share power.”
The debate remains open as to which tech giant Musk was referring to (my first guess was Google). But after thinking on it, perhaps Musk is simply alluding to whoever gets there first, and by 'gets there' I mean develops a general Artificial Intelligence capable of learning, thinking, and acting like a human (and Google definitely has a head start). Attaining this goal would be a great source of power for those who possess it. But this future AI wouldn't simply be a threat to our jobs and egos; it could quickly outpace even the most intelligent human. As outlined in this article by Tim Urban, once an AI attains a general level of intelligence, or Artificial General Intelligence (AGI), it could quickly surpass even the highest human IQ because of what Urban calls "recursive self-improvement". He explains as follows:
[Most] of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn’t involve self-improvement would now be smart enough to begin self-improving if they wanted to. This is best illustrated with an example:
[Imagine] an AI system at a certain level—let’s say human village idiot—[that] is programmed with the goal of improving its own intelligence. Once it does [improve itself], it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the [super-intelligent level]. This is called an Intelligence Explosion*, and it’s the ultimate example of The Law of Accelerating Returns.
*This term was first used by one of history’s great AI thinkers, Irving John Good, in 1965.
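To see why that compounding matters, here's a toy sketch in Python (fittingly, one of the languages I'm eyeing). To be clear, this is pure illustration, not a model of any real AI system: 'intelligence' is reduced to a single made-up number, and every improvement cycle adds a gain proportional to the current level. That proportionality is the whole trick.

# Toy illustration of Urban's "recursive self-improvement" argument.
# All numbers are invented: "intelligence" is a bare scalar, and each
# cycle's gain is proportional to the current level, so growth compounds.

def simulate_explosion(level=1.0, gain_rate=0.5, cycles=15):
    """Run self-improvement cycles; smarter systems make bigger leaps."""
    genius = 100.0  # arbitrary stand-in for the smartest human
    for cycle in range(1, cycles + 1):
        previous = level
        level += gain_rate * level  # the leap scales with current smarts
        note = "  <-- surpasses the smartest human" if previous < genius <= level else ""
        print(f"cycle {cycle:2d}: intelligence = {level:10.1f}{note}")

simulate_explosion()

With these made-up numbers, the level grows by 50% each cycle: it crawls for the first few cycles, then blows past the 'genius' threshold around cycle twelve and keeps accelerating. Constant effort, compounding payoff: the Law of Accelerating Returns in a single loop.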
"There is only one" |
But is this a worthy goal? Should the tools to develop an AI be freely available? Does sharing them with everyone pose the risk of their being taken and weaponized by any group with the will to do so? Despite the long list of those opposed to the development of autonomous weapons, Musk included, what's to stop a rogue group from building them anyway?
It is my belief that in order to safeguard against such an AI arms race, we should seek to build in the same elements that keep humans from taking up arms against each other: compassion, love, and empathy. For if an AI is able to achieve super-intelligence, it must surely be able to achieve a sense of self-consciousness, as well as an appreciation for all living creatures, including itself. This may sound like I'm getting mushy, but I believe it is a worthy goal to try to lay down the foundations for machines to acquire the same 'theory of mind' that helps us empathize with and understand each other. This would lead to AIs that are better suited to offer adequate service, maintain fairness, and respond to the needs of humans, even as they grow in autonomy.
In my next post, I'll be analyzing an example of a movie script that was written by an AI. Although the results were interesting and quite humorous, the script's surreal, child-like quality shows a glaring need for further development of the AI's architecture: it needs a process that governs the creation of context, a theory of mind, and a knack for narrative.
Thank you for reading,
-Nick