The pioneers of our future – Who invented AI Technology?

Artificial intelligence, or AI, has the potential to revolutionize our world: the way we do things and how we live. AI will be one of those big tools that propel us into a new future, much like computers and the internet did decades ago. Recently, we've seen many examples of neural networks in particular, from speeding up video game production and making graphics more realistic to tackling age-old physics problems like the three-body problem. That's all well and interesting, but we have to recognize that, today, the field of AI is building on the shoulders of giants. So the questions must be asked: who were those original giants? How did AI come to be? Who were the people that first dreamed their computers could think for themselves? Who are the pioneers of AI?

As soon as computers came into existence, scientists began thinking about how they could revolutionize our world. Even in the 1950s, they theorized that one day computers would be able to think for themselves. Many pioneers laid the foundation of AI, going as far back as Aristotle, whose theory of associationism in the fourth century BC began our attempt to understand the human mind. In this article, however, we are going to focus on the more recent, notable contributions: the so-called "Fathers of AI".

The first attempt, and the beginning of AI, starts with the psychologist Frank Rosenblatt in 1957. At that time, he developed what was called the "perceptron": a digital neural network designed to mimic a few brain neurons. Frank's first task for the network was to classify images into two categories. He scanned in images of men and women and hypothesized that, over time, the network would learn the differences between them, or at least pick up the patterns that made men look like men and women look like women. Just a year later, the media caught onto the idea and the hype was strong. In 1958, the New York Times reported that the perceptron was, to quote, the "embryo of an electronic computer that will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Unfortunately for Frank, despite the hype, his neural network didn't work very well at all. This was because he only used a single layer of artificial neurons, making it extremely limited in what it could do. And even worse, there wasn't much that could be done about it at the time.
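To make the idea concrete, here is a minimal Python sketch of a Rosenblatt-style single-layer perceptron. The function name and the tiny two-dimensional dataset below are purely illustrative stand-ins for his scanned images, not his original data or code.

```python
# A minimal sketch of a single-layer perceptron, in the spirit of
# Rosenblatt's 1957 design. The toy points below are hypothetical
# stand-ins for the two image categories he used.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):          # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                          # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Two linearly separable "categories" (stand-ins for the two image classes).
samples = [(2.0, 1.0), (3.0, 2.5), (-1.0, -2.0), (-2.5, -1.5)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(samples, labels)
print(w, b)
```

A single layer like this can only ever draw one straight line through the data, which is exactly the limitation described above.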

Computers of that day could only handle this simple setup, and these problems were never solved. In 1969, the computer science community abandoned the idea, and with that, AI was dead. Everyone may have given up on it, but decades later a keen computer scientist by the name of Geoffrey Hinton thought that everyone else was just plain wrong. He reasoned that the human brain is itself a neural network, and the brain evidently makes for an incredibly powerful system. To him, this was as much proof as he needed: artificial neural networks had to work somehow; maybe they just needed some tweaking. Hinton saw the genius in the idea that everyone else missed.

Hinton is a superstar in the AI world, having authored 200 peer-reviewed publications, and he was instrumental in the fundamental research that brought about the AI revolution. After studying psychology, Hinton moved into computer science and pursued his lifelong quest of modelling the brain. Originally from the UK, he moved to the University of Toronto, where he would go on to develop multi-layer neural networks. He and his team realized that the problem with Frank Rosenblatt's single-layer approach was that more layers were needed to allow for much greater capabilities, and the computers of the day were now powerful enough to handle them. This multi-layer approach solved the problem that Frank had, and the resulting networks were much more capable. Today we call this multi-layer approach a deep neural network. In 1985, Hinton co-authored a paper that introduced the Boltzmann machine. Boltzmann machines are fundamental building blocks of early deep neural networks; you can think of them as the Ford Model T of neural networks. Without getting into the details, the concept is to have groups, or layers, of neurons communicate in such a way that each artificial neuron learns a very basic feature from the data.
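As a small illustration of why extra layers matter, here is a toy Python sketch of the classic XOR function, which no single-layer perceptron can compute but a two-layer network handles easily. The weights are hand-picked purely for illustration; this is not how real networks are built, only a demonstration of the added expressive power.

```python
# A minimal sketch of why depth matters: XOR cannot be separated by a single
# perceptron layer, but a tiny two-layer network (hand-picked weights,
# purely illustrative) computes it exactly.

def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: one unit fires on "a OR b", the other on "a AND b".
    h1 = step(a + b - 0.5)        # OR
    h2 = step(a + b - 1.5)        # AND
    # Output layer: OR minus AND gives XOR.
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # prints the XOR truth table
```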

Soon, others began innovating on top of deep neural networks. A self-driving car was built on neural networks in the late 80s, and later, in the 90s, a man by the name of Yann LeCun would build a program that recognized handwritten digits. This program would go on to be used widely, and Yann LeCun would become an AI pioneer in his own right. LeCun studied under Geoffrey Hinton and would lead the research that made Hinton's theory of backpropagation a reality. Backpropagation, in simple terms, is the process by which computers learn from their mistakes and hence become better at a given task, much the same way humans learn from trial and error.

However, the idea of AI being used for much more was short-lived. The field was stifled by two problems: one, slow and inadequate computing power, and two, a lack of data. A burst of investor confidence was eventually met with disappointment, and the research money began drying up. Geoffrey was ridiculed and forced to the sidelines of the computer science community, seen as a fool for his long-standing faith in a failed idea. Undeterred by the opinion of his colleagues, Hinton pursued his dream with an unfazed obsession. By 2006, the world had finally caught up to him. Computer processing speed had grown significantly since the 90s. Moore's law, observed by Intel's co-founder Gordon Moore, stated that the number of transistors per square inch doubles about every two years, which meant that computers were growing in processing power exponentially. That was the first problem solved. Meanwhile, thanks to the advent of the internet some 15 years earlier, a wealth of data had been acquired, and this solved the second problem. The ingredients of AI were now there: the computers were powerful enough, and there was enough data to play with. By 2012, the once-ridiculed Geoffrey Hinton was 64 years of age, and continuing the work wasn't an easy task. Hinton was forced to permanently stand due to a back injury that would cause a disc to slip out whenever he sat down.
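For a concrete feel of what "learning from mistakes" means, here is a minimal Python sketch of backpropagation training a tiny one-hidden-layer network on the XOR problem from above. It is an illustrative toy under simple assumptions (sigmoid units, squared error, full-batch gradient descent), not the pioneers' original code or data.

```python
# A minimal sketch of backpropagation: a tiny one-hidden-layer network
# learns XOR by repeatedly nudging its weights against the error gradient.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

hidden, lr = 8, 1.0
W1 = rng.normal(size=(2, hidden)); b1 = np.zeros(hidden)   # input -> hidden
W2 = rng.normal(size=(hidden, 1)); b2 = np.zeros(1)        # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the error back through each layer (backpropagation).
    d_out = (out - y) * out * (1 - out)          # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)           # hidden-layer error signal
    # Learn from the mistakes: nudge every weight downhill along the gradient.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically very close to [0, 1, 1, 0]
```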

The birth of the modern AI movement can be traced back to a single date: September 30th, 2012. On this day, Geoffrey and his team fielded the first deep artificial neural network on a widely known benchmark image-recognition test called ImageNet. Hinton's program was called AlexNet, and when it was unleashed on this dataset, it performed like nothing anyone had ever seen. AlexNet destroyed the competition, scoring over a 75% success rate, 41% better than the best previous attempt. This one event showed the world that artificial neural networks were indeed something special, and it sent an earthquake through the science community. A wave of neural net innovation began, and soon the world took notice. After this point, everyone began using neural networks in the image benchmark challenge, and the accuracy of identifying objects rose from Hinton's 75% to 97% in just seven years. For context, 97% accuracy surpasses the human ability to recognize objects; computers recognizing objects better than humans had never happened before. Soon the floodgates of research opened, and the general interest in neural networks would change the world. By the late 2010s, image recognition was commonplace, even recognizing disease in medical imaging. And images were just the beginning: soon neural net AI was tackling video, speech, science and even games. Today we see AI everywhere.

Tesla, among many other companies, has created a sophisticated self-driving AI which is already sharing the road with humans. It is predicted that self-driving cars will reduce accidents by up to 90%, while smart traffic lights could reduce travel time by 26%. Netflix and YouTube use AI to learn what shows you watch and recommend new ones. Uber uses machine learning to determine surge pricing for your rides, estimate your time of arrival and optimize its service to avoid detours. There is also an interesting hide-and-seek AI, as shown by the YouTube channel Two Minute Papers. In this scenario, two AI teams battle against each other, each outsmarting the other as the rounds of the game go on. After a given time, one of the teams figured out how to break the game's physics engine in order to win, something the researchers never anticipated and a potent demonstration of AI's problem-solving abilities. The popular app TikTok is completely AI-driven.

So now AI is everywhere. It's in our daily lives even if we're not aware of it. Of course, there are many more examples of AI being used, but perhaps the most interesting uses will come after we reach the singularity.

Conclusion
Artificial intelligence has rapidly grown, in the span of less than two decades, from the fringes of science to a centerpiece of the modern world. Without the work of these pioneers who refused to give up, our future might look very different. Perhaps we don't yet understand the full potential of AI, but it should be obvious that their work marks a significant point in human history. Much like the invention of fire, the wheel, electricity, computers and the internet, artificial intelligence will be one of humanity's greatest tools. Due to his back condition, Geoffrey Hinton hasn't sat down for the last 12 years; at 71, we hope he will keep standing for many more years to come. While AI is helping many people today, we can only hope that it will continue to be used for good in the future.