
The Future of AI and Human Evolution

By Rohan Khanna

13.8 billion years ago, the Big Bang happened. From then to now, many things have happened that molded our species into existence. In the aftermath of the Big Bang, vast clouds of gas and dust collapsed under their own gravity, which led to the formation of the Milky Way Galaxy and eventually the creation of our home, the Earth. Atoms came together to form molecules, and around 3.5 billion years ago these molecules formed unicellular microorganisms that contained DNA, the most important building block of life and evolution. Around 1.2 billion years ago, single-celled organisms started communicating with each other to work effectively as a single unit and evolved into multicellular organisms. Over time, these multicellular organisms evolved, and roughly 2.2 million years ago the first members of the genus Homo started walking the Earth. About 195,000 years ago, these early humans evolved into Homo sapiens: human beings. Ever since then, our environment has undergone many changes, and human life, like all life, has evolved to adapt to them. We went from being heavily built, dull-witted hunter-gatherers to the smartest, and consequently the most powerful, species on the planet, one that dominates all life on Earth. We evolved into a species that relies on intelligence for its survival rather than brute strength. The development of Artificial Intelligence will eventually lead to the next phase of human evolution, allowing us to achieve superintelligence, with the caveat that this development must be regulated in a way that ensures human safety.

What makes human intelligence so unique? What about our brains makes us smarter than other animals? In the fossils of early mammals, which came into existence about 200 million years ago, researchers found a thin covering around their small brains. This covering, called the neocortex, made these mammals capable of a new way of thinking never seen before. Non-mammalian creatures, those without a neocortex, were only capable of fixed behaviors; it took their brains generations to evolve a new fixed behavior and add it to their repertoire. The addition of the neocortex allowed mammals to invent new behaviors within a single lifetime, enabling them to learn from their own experiences. To put it simply, the neocortex allows the brain to invent and learn new things. Around 65 million years ago, an asteroid collided with the Earth and wiped out 75 percent of the Earth's animal and plant species. As time went by, evolution favored the development of the neocortex, presumably because it increased the chances of survival. As mammals became bigger, their brain size grew at an even faster pace, and the neocortex covered more and more of the brain's surface area. Today, the neocortex makes up about 80% of our brain, and it is where most of our conscious thinking is done. The non-neocortex part of our brain, the "original" brain, gives us our basic drives and the motivation to achieve a goal; the neocortex figures out how to achieve it.

Researchers have delved deeper into how the neocortex works. The neocortex is divided into roughly 300 million modules, each capable of learning, remembering, and implementing patterns. These modules are organized in hierarchies, each level more abstract than the one below, which we dynamically create with our own thinking. A low-level module may light up when it sees a capital 'A'; this sends a signal to modules higher up in the hierarchy to watch out for words that start with 'A'. Say we see the letters A-P-P-L: this lights up the module that recognizes the word "Apple", which in turn signals the module that recognizes the letter 'E' to expect an 'E'. Individual modules in the neocortex light up and send signals when they observe certain characteristics or features, and as these signals travel up the hierarchy, they collectively allow your brain to reach a conclusion. This is how the brain performs any task, be it as simple as reading a sentence or as complex as devising an algorithm to revolutionize Artificial Intelligence.
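The letter-and-word example above can be condensed into a few lines of code. This is purely my own illustration of the idea of hierarchical pattern modules (the word list and function name are hypothetical), not a model of real neurons:

```python
# Toy sketch of hierarchical pattern modules: low-level "letter" signals feed
# higher-level "word" modules, which send expectations back down the hierarchy.

KNOWN_WORDS = ["APPLE", "APRIL", "BANANA"]  # hypothetical word-level modules

def predict_next_letter(seen: str) -> list[str]:
    """Given the letters observed so far, return the letters that the
    partially matched word modules expect to see next."""
    expectations = set()
    for word in KNOWN_WORDS:
        if word.startswith(seen) and len(seen) < len(word):
            expectations.add(word[len(seen)])  # the module signals downward
    return sorted(expectations)

print(predict_next_letter("APPL"))  # the "APPLE" module expects an 'E'
```

Seeing "AP" would make both the "APPLE" and "APRIL" modules fire partial matches, so the expectation set widens; seeing "APPL" narrows it to a single letter.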

Artificial Intelligence (AI) has come a long way since its inception in the 1950s. Deep Learning, the branch of AI that is proving to be the most effective route to artificial intelligence, is modeled after the neocortex and learns much like the human brain. In recent years, this machine learning technique has seen a great deal of success, encouraging experts in the field to shift their focus to Deep Learning. Before Deep Learning, AI programmers would painstakingly build artificially intelligent agents by hand. These agents were neither accurate, efficient, nor scalable. Deep Learning changed the game: rather than manually creating knowledge representations and logic models, deep learning algorithms let the agent learn from data, much like a human baby. As a result, a deep-learning-based neural network gets better as we give it more data and computation time. Not only are the outputs of these agents highly reliable, but their neural networks are also highly scalable; the same architecture used to categorize pictures of dogs and cats can be adapted to detect cancer or recognize faces.

Deep Learning is an algorithm modeled after the neocortex, and as a result there are arguably no theoretical limits on what it can achieve. A deep learning agent is an artificial Neural Network (NN) whose elementary unit, called a neuron, works much like the neurons in the human brain. In a simplified model, Deep Learning works like this: you define multiple layers of neurons, starting with an input layer, followed by multiple hidden layers, and ending with an output layer. There are no bounds on how many neurons each layer can have or on how many hidden layers the network contains. You start by defining a set of input parameters based on the data you have. Standardized or normalized input values are then placed in input nodes, which make up the input layer. Each node's value is then multiplied by a weight and passed through multiple "hidden" layers of neurons, also called perceptrons.

The perceptrons are where the magic happens; not really magic, just some math. In the first hidden layer, each perceptron takes its input values, multiplied by their weights, and calculates their sum. The perceptron then passes this sum through an activation function, which is predefined for all the perceptrons in a given layer. An activation function decides whether a neuron "fires" or not based on its input value; this is what gives the perceptrons their human-neuron-like behavior. Depending on how it is defined, the activation function allows the perceptron to fire when its input crosses a certain threshold, outputting a zero otherwise. Simply put, the activation function looks for a particular feature in the input values and makes the perceptron fire when it observes that feature. Each perceptron then outputs a value based on its activation function. These output values are multiplied by different weights again and passed to the next layer of perceptrons, and the cycle continues until the data reaches the output layer.
At the output layer, the network predicts an output value and compares it to the real value specified in the training set. This comparison is fed into a cost function, which measures the error in the NN's prediction. By applying gradient descent to the cost function, the NN minimizes its error and updates the weights of the entire network through backpropagation with each pass over the training data. This is the key to how the neural network learns. With time and a lot of training data, the network learns how, and how much, each input parameter influences the output, giving us a highly accurate and scalable artificially intelligent agent.
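The forward pass, cost function, and gradient-descent update described above fit in a small framework-free sketch. The toy dataset, layer sizes, and learning rate here are illustrative assumptions of mine, not the article's:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # 100 samples, 3 input features
y = (X @ np.array([1.5, -2.0, 0.5]) + 1).reshape(-1, 1)  # values to learn

W1 = rng.normal(scale=0.1, size=(3, 8)); b1 = np.zeros(8)  # hidden layer (8 perceptrons)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)  # output layer (1 node)
lr = 0.05                                                  # gradient-descent step size

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))  # the cost function

initial_loss = mse(np.maximum(0, X @ W1 + b1) @ W2 + b2, y)

for step in range(2000):
    # forward pass: weighted sums, then the activation decides which neurons fire
    h = np.maximum(0, X @ W1 + b1)        # ReLU fires only above the zero threshold
    pred = h @ W2 + b2                    # output layer: predicted values
    # backpropagation: gradients of the cost flow backward through each layer
    grad_pred = 2 * (pred - y) / len(X)
    grad_W2 = h.T @ grad_pred; grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T
    grad_h[h <= 0] = 0                    # ReLU passes gradient only where it fired
    grad_W1 = X.T @ grad_h; grad_b1 = grad_h.sum(axis=0)
    # gradient descent: nudge every weight against its gradient
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

final_loss = mse(np.maximum(0, X @ W1 + b1) @ W2 + b2, y)
print(f"loss before training: {initial_loss:.3f}, after: {final_loss:.3f}")
```

Each pass over the data lowers the cost a little; with enough iterations the weights settle on values that capture how each input feature influences the output.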

To better understand how Deep Learning works, let's consider a simple example that uses Keras, a deep learning library that runs on top of TensorFlow. Imagine you are a real estate agent and you need to predict the value of a property. One way to do this is to find the prices of other properties being sold in the neighborhood with similar square footage and architecture and estimate your property's value from those. This method poses multiple problems, including a lack of scalability and inaccurate estimates. A better way is to use Deep Learning: you can create a NN that evaluates property values more accurately and efficiently for any property in a state, producing predictions that are highly scalable and not based solely on one agent's experience. First, you need a dataset of all the property values in your region. Once you have the data, you can define the input parameters based on it. The beauty of this approach is that you do not have to approximate how an input parameter affects the property value. That approximation is the biggest source of conflict in valuing property, because every agent makes it based on their own experience, creating non-uniform cost estimates. The deep neural network, on the other hand, figures out how each input parameter affects the output from all of the data. After you define the input parameters, it is time to create the layers of the neural network. For the purposes of this example, we will create a simple NN with one input layer, one hidden layer of perceptrons, and one output layer. In the input layer, the number of neurons/nodes equals the number of input parameters in the dataset.
For the hidden layer, there is no fixed number of neurons; the number is up to the discretion of the programmer, though a general rule of thumb is to take the average of the number of input and output nodes. You also need to define the activation function; a kernel initializer, which gives the weights starting values close to zero; the input dimension, which, simply put, is the number of nodes in the input layer; and an output dimension, which is the number of nodes in the next layer. The choice of activation function can vary, and it is your job to figure out which one is best for your NN. Next, we add the last layer of the NN, the output layer. This layer contains only one node: the predicted price of the property. Now it is time to configure the learning process of the NN by compiling it. At this step, you specify the cost function and the optimizer, both of which influence the network's weights and thereby enable the deep learning agent to learn how each input parameter affects the predicted property value. The neural network is now ready for training. You pass all the data through the network multiple times and wait for it to finish training; this wait can vary from 10 minutes to 10 weeks, or even more, depending on a number of factors, including the size of the dataset, the number of epochs, and the computational power of your machine. The time it takes to train deep learning agents is one of the biggest challenges researchers are trying to overcome. After the NN has completed training, you can measure the accuracy with which it predicts property values. By experimenting with more hidden layers, different cost functions, optimizers, activation functions, the number of epochs, and the way you pass the data, you can increase the accuracy and get more consistent results. The neural network is now ready for use.
It provides real estate agents with a comprehensive, accurate, and scalable way to analyze property values.
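The walkthrough above might look like the following Keras sketch. It is my own illustration: the random stand-in arrays, the ten input features, and the six hidden neurons are hypothetical choices, not from the article.

```python
import numpy as np
from tensorflow import keras

n_features = 10                       # e.g. square footage, bedrooms, age, ...
X = np.random.rand(200, n_features)   # stand-in for the agent's property dataset
y = np.random.rand(200, 1)            # stand-in for the known sale prices

model = keras.Sequential([
    keras.Input(shape=(n_features,)),           # input layer: one node per parameter
    keras.layers.Dense(6, activation="relu",    # hidden layer: ~avg of 10 in, 1 out
                       kernel_initializer="random_uniform"),  # weights start near zero
    keras.layers.Dense(1),                      # output layer: the predicted price
])

# Compiling configures the learning process: the cost function and the optimizer.
model.compile(loss="mean_squared_error", optimizer="adam")

# Training: pass the data through the network multiple times (epochs).
model.fit(X, y, epochs=5, batch_size=16, verbose=0)

predicted_price = model.predict(X[:1], verbose=0)  # estimate for one property
```

In practice you would replace the random arrays with a cleaned dataset of real listings, standardize the inputs, and experiment with the layers, optimizer, and epoch count as described above.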

With the advent of Deep Learning, AI research and development have been at their absolute peak in the last few years. While there are hundreds of computer programmers and researchers working on AI solutions that are rapidly changing our world, there are many naysayers as well. These naysayers believe that the development of AI is dangerous and needs to be heavily regulated, as it could eventually lead to a dystopian future with severe economic and social disparities. Some people also believe humans would become completely obsolete, with artificially intelligent agents taking over the Earth while keeping the few who created them rich and happy. While many of the general public's concerns are rooted in misinformation and lack of knowledge, there are serious concerns raised by experts in this field, and we need to start thinking about solutions now rather than dealing with these problems once they pose a serious threat. Experts like Jeremy Howard and Petre Prisecaru believe that we will soon automate all low-level and service-based tasks, making about 80% of the world's jobs obsolete. Deep-learning-based networks already power agents that perform service-based tasks like reading, writing, listening, speaking, and integrating knowledge, so the day is not far off when humans in low-level service jobs are replaced by machines entirely. To combat the economic disparity that could arise, some economists suggest a Universal Basic Income (UBI), which sounds great on paper but is extremely hard to implement. UBI also offers no answer to our constant need to do something: as a species, we have evolved such that we cannot be idle; we constantly need something to engage us. With millions of people out of jobs, even if humans are taken care of monetarily, the result could still be major political and social unrest.
Another area of concern for some experts is that AI will attain superintelligence. Many AI researchers believe that artificial intelligence agents will achieve human-level machine intelligence by 2050, and that from there superintelligence is inevitable. This is because electrical signals travel considerably faster, and can carry more information per second, than biological signals, and an electronic neural network has no size constraints, whereas the human brain can only be so big. Superintelligent machines could possibly rule over all life on Earth; as the history of evolution has shown us, the smartest species controls the planet.
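To put that speed gap in rough numbers, here is a back-of-the-envelope comparison using standard textbook figures; the exact values are order-of-magnitude assumptions of mine, not the article's:

```python
# Biological signaling vs electronic signaling, order-of-magnitude figures.
axon_speed_m_s = 120        # fast myelinated axon conduction: ~120 m/s
wire_speed_m_s = 2e8        # signal in copper or fiber: ~2/3 the speed of light
neuron_rate_hz = 200        # sustained neuron firing rate: ~200 Hz
transistor_hz = 2e9         # a modest 2 GHz processor clock

speed_ratio = wire_speed_m_s / axon_speed_m_s   # how much faster signals travel
rate_ratio = transistor_hz / neuron_rate_hz     # how much more often they can fire

print(f"signals travel ~{speed_ratio:,.0f}x faster electronically")
print(f"and components can fire ~{rate_ratio:,.0f}x more often")
```

Even with conservative numbers, electronic signaling wins by six to seven orders of magnitude on both axes, which is the core of the superintelligence argument.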

What I am suggesting for the future of AI will allow us to skirt the major concerns experts have about its development. As we have seen throughout this paper, deep neural networks and the neocortex function in the same way: deep neural networks are modeled on the neocortex, and both learn by studying a multitude of data and using a collection of neurons to reach conclusions. Thus, a deep neural network can be seen as an extension of the neocortex. This is exactly why I believe that AI is the next phase of human evolution. AI should not be used to replace human beings with machines; rather, we should use AI to expand our own minds and move toward a new hybrid way of thinking that is both biological and electronic. Right now, with only 300 million modules in your neocortex, you are one of the smartest beings on our planet; imagine if your neocortex were extended to the cloud, where it has no physical bounds. Economic, social, and political unrest would arise only if artificially intelligent machines were to take over human jobs, not if we use AI to enhance our own thinking. The threat we face from superintelligent machines, however, needs to be addressed now. We need to find a way to ensure that the morals and values of these machines align with ours. The development of AI should be regulated so that anyone working on general or superintelligence creates agents with a value system similar to our own. If we overcome this obstacle, we could achieve hybrid thinking within this century.

The last time the neocortex expanded was about 2 million years ago. When we evolved into humanoids, our neocortex expanded into what is now our frontal cortex, i.e. the forehead. This purely quantitative increase gave us the ability to create the world we live in today and to dominate all other species on our planet. I cannot even begin to imagine where this new hybrid way of thinking will take us.

Please get in touch to learn more about how Biz-Tech Analytics can bring a data-centric workflow to your AI project.
