
One AI to Rule Them All

In my last post, I mentioned that I was interested in looking across the field of Computer Science to see if there may be other areas besides web development worth working in down the road. Primarily, I'm interested in a more powerful run-time environment, so Python or Clojure seem very enticing. A safe first step would be to learn about the architecture of server-side code, both to deepen my understanding of web stacks and to satisfy my curiosity about what else is out there in terms of languages, VMs, and programming paradigms.

Another motivation to look outside of web dev came more recently, in the form of some sage advice from an interview with Elon Musk. In it, he states that there is too much talent working in the internet space [55:00]. So if web dev is overcrowded, what would be a worthy Comp-Sci field to pursue?


In the same interview, Musk also alludes to the seriousness of the approaching revolution in the field of Artificial Intelligence. He describes how, even in the most benign future involving a hypothetical super-intelligent AI, we would be regarded as superfluous curiosities, or pets. But one only needs to think of movies like The Terminator and The Matrix to imagine a more malignant future. When asked if any of the leading tech companies caused him concern, Musk answered, "I won't name names, but there is only one," playfully paraphrasing Gandalf's ominous statement from J.R.R. Tolkien's The Fellowship of the Ring:
[Image: Gandalf, via funbuzztime.com]
“There is only one Lord of the Ring, 
only one who can bend it to his will. 
And he does not share power.”

The debate remains open as to which tech giant Musk was referring to (my first guess was Google). But after thinking on it, perhaps Musk is simply alluding to whoever gets there first, and by "gets there" I mean develops a general Artificial Intelligence capable of learning, thinking, and acting like a human (and Google definitely has a head start). Attaining this goal would be a great source of power for whoever possessed it. But this future AI wouldn't simply be a threat to our jobs and egos; it could quickly outpace even the most intelligent human. As outlined in this article by Tim Urban, once an AI attains a general level of intelligence, or Artificial General Intelligence (AGI), it could quickly surpass even the highest human IQ because of what Urban calls "recursive self-improvement". He explains as follows:
[Most] of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn’t involve self-improvement would now be smart enough to begin self-improving if they wanted to. 
 This is best illustrated with an example: 
[Imagine] an AI system at a certain level—let’s say human village idiot—[that] is programmed with the goal of improving its own intelligence. Once it does [improve itself], it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the [super-intelligent level]. This is called an Intelligence Explosion*,  and it’s the ultimate example of The Law of Accelerating Returns.
*This term was first used by one of history’s great AI thinkers, Irving John Good, in 1965. 
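
To make that feedback loop concrete, here's a toy Python sketch of my own (an illustration I'm adding, not anything from Urban's article, and obviously not a real AI): it just compounds an "intelligence" number, where each round's gain is proportional to the current level.

```python
# Toy model of recursive self-improvement (my own illustration, not a
# real AI algorithm): each round, the system improves itself, and a
# smarter system makes proportionally bigger leaps.

def intelligence_explosion(start=1.0, rate=0.5, rounds=10):
    """Return the 'intelligence' level after each self-improvement round."""
    levels = [start]
    for _ in range(rounds):
        # The gain scales with the current level, so growth compounds
        # exponentially rather than adding a fixed amount each round.
        levels.append(levels[-1] * (1 + rate))
    return levels

for i, level in enumerate(intelligence_explosion()):
    print(f"round {i:2d}: intelligence = {level:8.2f}")
```

The village idiot, Einstein, and everything beyond are just points along the steepening part of that curve; the leaps only become visible to us once they're already enormous.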
So if this capability ends up in the control of a small few, they will wield great power. And it is only natural that those with great power will not be willing to share it, as Gandalf declared (and Musk alluded to).

"There is only one"
Musk has made efforts to mitigate this potential risk by funding OpenAI (along with other tech notables such as Sam Altman, president of the startup incubator Y Combinator). OpenAI is a 501(c)(3) non-profit organization that shares Musk's sense of urgency about the race to AGI. They're hiring some excellent talent, and have recently released a beta version of a set of tools designed for training AIs. Their work is intended to remain free and open to the public, in order to keep AI from remaining in the hands of a select few.
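
For the curious, here's roughly what a first session with that toolkit looks like, assuming it's OpenAI Gym, the reinforcement-learning environment suite OpenAI released in beta (treat this as a sketch, not official documentation): an agent that takes random actions in the classic CartPole balancing task.

```python
# A minimal "random agent" sketch using OpenAI Gym (assuming that's the
# beta toolkit mentioned above). Install with: pip install gym
import gym

env = gym.make("CartPole-v0")    # classic control task: balance a pole on a cart
obs = env.reset()                # initial observation of the environment
done = False
total_reward = 0.0

while not done:
    action = env.action_space.sample()           # random policy, no learning yet
    obs, reward, done, info = env.step(action)   # advance the simulation one step
    total_reward += reward

print("episode reward:", total_reward)
```

Actually training an AI means replacing `env.action_space.sample()` with a policy that learns from the observations and rewards; Gym's contribution is standardizing that loop so anyone's agent can be benchmarked on anyone's environment.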

But is this a worthy goal? Should the tools to develop an AI be freely available? Does sharing them with everyone pose the risk of their being taken and weaponized by any group with the will to do so? Despite the long list of those opposed to the development of autonomous weapons, Musk included, what's to stop a rogue group from building them?

It is my belief that in order to safeguard against such an AI arms race, we should seek to build in the same elements that keep humans from taking up arms against each other: compassion, love, and empathy. For if an AI is able to achieve super-intelligence, it must surely be able to achieve a sense of self-consciousness, as well as an appreciation for all living creatures, including itself. This may sound like I'm getting mushy, but I believe it is a worthy goal to try to lay down the foundations for machines to acquire the same 'theory of mind' that helps us empathize with and understand each other. This would lead to AIs that are better suited to offer adequate service, maintain fairness, and respond to the needs of humans, even as they grow in autonomy.


In my next post, I'll be analyzing an example of a movie script that was written by an AI. Although the results were interesting and quite humorous, their surreal, child-like quality shows a glaring need for further development of the AI's architecture: a process that manages the creation of context, a theory of mind, and a knack for narrative.

Thank you for reading,
-Nick


