
One AI to Rule Them All

In my last post, I mentioned that I was interested in looking across the field of Computer Science to see if there might be other areas worth pursuing in the future, besides web development. Primarily, I'm interested in a more powerful run-time environment, so Python or Clojure seem very enticing. A safe first step would be to learn about the architecture of server-side code, to deepen my understanding of web stacks and satisfy my curiosity about what else is out there in terms of languages, VMs, and programming paradigms.

Another motivation to look outside of web-dev came more recently, in the form of some sage advice from an interview with Elon Musk. In it, he states that there is too much talent working in the internet space [55:00]. So if web-dev is overcrowded, what would be a worthier field of Computer Science to pursue?


In the same interview, Musk also alludes to the seriousness of the approaching revolution in the field of Artificial Intelligence. He describes how, even in the most benign future involving a hypothetical super-intelligent AI, we would be regarded as superfluous curiosities, or pets. But one only needs to think of movies like The Terminator and The Matrix to imagine a more malignant future. When asked if any of the leading tech companies caused him concern, Musk answered, "I won't name names, but there is only one," playfully paraphrasing Gandalf's ominous statement from J.R.R. Tolkien's The Fellowship of the Ring:
[Gandalf image from funbuzztime.com]
“There is only one Lord of the Ring, 
only one who can bend it to his will. 
And he does not share power.”

The debate remains open as to which tech giant Musk was referring to (my first guess was Google). But after thinking on it, perhaps Musk is simply alluding to whoever gets there first, and by "gets there" I mean develops a general Artificial Intelligence capable of learning, thinking, and acting like a human (and Google definitely has a head start). Attaining this goal would be a great source of power for those who possessed it. But this future AI wouldn't simply be a threat to our jobs and egos; it could quickly outpace even the most intelligent human. As outlined in this article by Tim Urban, once an AI attains a general level of intelligence, or Artificial General Intelligence (AGI), it could rapidly surpass even the highest human IQ because of what Urban calls "recursive self-improvement". He explains as follows:
[Most] of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn’t involve self-improvement would now be smart enough to begin self-improving if they wanted to. 
 This is best illustrated with an example: 
[Imagine] an AI system at a certain level—let’s say human village idiot—[that] is programmed with the goal of improving its own intelligence. Once it does [improve itself], it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the [super-intelligent level]. This is called an Intelligence Explosion*,  and it’s the ultimate example of The Law of Accelerating Returns.
*This term was first used by one of history’s great AI thinkers, Irving John Good, in 1965. 
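
To see why those leaps compound, here's a toy simulation in Python. It's entirely illustrative: the starting level, the human-genius ceiling, and the growth rate are numbers I made up. The only real idea it encodes is Urban's: each round, the gain is proportional to the system's current intelligence.

```python
# Toy illustration of "recursive self-improvement" (all numbers are made up).
# Each round, the improvement is proportional to current intelligence, so
# smarter systems improve themselves faster -- an accelerating curve.

intelligence = 75.0   # hypothetical starting point: "village idiot" level
human_genius = 200.0  # hypothetical ceiling of human intellect

step = 0
while intelligence <= human_genius * 10:  # run until far past any human
    gain = 0.5 * intelligence             # leap size scales with intellect
    intelligence += gain
    step += 1
    print(f"step {step}: intelligence = {intelligence:.0f} (gain {gain:.0f})")
```

Run it and the early steps crawl while the later ones leap by hundreds, which is the whole point of the "explosion" metaphor.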
So if this capability remains in the control of a small few, they will wield great power. And it is only natural that those with great power will not be willing to share it, as Gandalf declared (and Musk alluded to).

"There is only one"
Musk has made efforts to mitigate this potential risk by funding OpenAI (along with other tech notables such as Sam Altman, president of the startup incubator Y Combinator). OpenAI is a 501(c)(3) non-profit organization that shares Musk's sense of urgency about the race to AGI. They're hiring some excellent talent, and have recently released a beta version of a set of tools designed for training AIs. Their work is intended to remain free and open to the public, in order to keep AI from ending up in the hands of a select few.
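
If I had to guess, that beta toolkit is OpenAI Gym. Assuming so, here's a minimal sketch of how an agent interacts with it; the CartPole environment and the random policy are just placeholders for illustration:

```python
# A minimal sketch assuming the toolkit is OpenAI Gym (gym.openai.com).
# It runs a random policy on the classic CartPole balancing task.
import gym

env = gym.make("CartPole-v0")  # a simple benchmark environment

for episode in range(5):
    observation = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        # random policy; a real agent would choose actions by learning
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print("episode", episode, "reward:", total_reward)
```

A real project would replace the random action with a learning algorithm, but the observe-act-reward loop stays the same.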

But is this a worthy goal? Should the tools to develop an AI be freely available? Does sharing them with everyone pose the risk of their being taken and weaponized by any group with the will to do so? Despite the long list of those opposed to the development of autonomous weapons, Musk included, what's to stop a rogue group?

It is my belief that in order to safeguard against such an AI arms race, we should seek to build in the same elements that keep humans from taking up arms against each other: compassion, love, and empathy. For if an AI is able to achieve super-intelligence, it must surely be able to achieve a sense of self-consciousness, as well as an appreciation for all living creatures, including itself. This may sound like I'm getting mushy, but I believe it is a worthy goal to try to lay down the foundations for machines to acquire the same 'theory of mind' that helps us empathize with and understand each other. This would lead to AIs that are better suited to offer adequate service, maintain fairness, and respond to the needs of humans, even as they grow in autonomy.


In my next post, I'll be analyzing an example of a movie script that was written by an AI. Although the results were interesting and quite humorous, the ludicrous and child-like output shows a glaring need for further development of the AI's architecture: by adding a process that controls the creation of context, a theory of mind, and a knack for narrative.

Thank you for reading,
-Nick

