Luke, Online English Teacher at Acadsoc.
Imagine reaching for the constellations – on a collision course with the universe – while our limited knowledge passes off its calculations and perceptions as ‘accurate’. Then imagine creating the perfect citizen: one who never breaks the rules, who declares the impossible to be exactly that, and who bars any form of imagination that cannot be quantified by science… Makes you think, doesn’t it?
Take, for instance, the renowned scientist and informal philosopher Stephen Hawking, who dedicated his existence to challenging society’s definitions and limitations. At the age of 21, he was diagnosed with amyotrophic lateral sclerosis, a disease that causes the death of the neurons controlling voluntary muscles. Doctors gave him two to three years to live, but rather than be limited by the diagnosis, Hawking made it a point to push himself to explore the wonders of the world and the universe alike. Fast-forward 55 years from that initial diagnosis, and the poster child for never giving up had achieved many feats: co-formulating the four laws of black hole mechanics, developing the theory of Hawking radiation emitted by black holes, authoring the book A Brief History of Time, being appointed a Commander of the Order of the British Empire, and receiving the Gold Medal of the Royal Astronomical Society (the RAS).
Though Hawking spent half of his lifespan introducing new scientific knowledge to younger generations, he consistently voiced a pessimistic view of artificial intelligence on public occasions. Sadly, he can no longer elaborate on why he opposed its unchecked development. Is he right this time? Will his warning one day come true, as his black hole predictions did? In his criticism of artificial intelligence, could Stephen Hawking still be teaching us even after his death?
Hawking’s views could be summed up by statements such as: “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
For some, such statements are draped in the thickest of irony, because Stephen Hawking’s own ability to interact with the world depended on intelligent technology. Through ‘the computer’ – a speech-generating device (SGD), also known as a voice output communication aid – Hawking was able to communicate with us. An SGD is a device that supplements or replaces speech and writing. In its purest form, the SGD was ahead of its time and took the world by storm; even many years on, an ignorant mind might still have attributed the scientist’s way of communicating to sorcery.
But what was the basis of his words when his legacy relied so heavily on an intelligent form of technology? To a certain degree, we can attribute Stephen Hawking’s voice to that of a robot, because that is precisely how we will remember it – a voice we imagine our future robots will have.
The renowned scientist’s concerns may have come from years of interaction with the machine. What makes them essential is both his outstanding intelligence and his decades of witnessing how machines ‘evolve’. Are we wary of the dangers of relying heavily on technology? Are we the masters of technology, or its slaves?
Machines nowadays store data on everything – our interactions, our likes, our dislikes, even our preferences – and, depending on the quality of algorithms yet to be developed, they may one day model and ‘code’ feelings. Imagine a world where a machine knows how you are feeling (or should feel) and plans your day based on those predetermined speculations. It would create an expectation of ‘perfection’, since that is what machines are built to do: perform a given task seamlessly. What we may be creating is a scenario in which we, as humans, must live (or die) by what the machines predict or expect.
Many scientists hold an optimistic view of AI because human beings design all the algorithms and models in a machine, and these can be rewritten or deleted if they are deemed dangerous. But what if machines one day design algorithms for other machines? Could we still fully supervise what is going on?
Some of the most potent points that Stephen Hawking raised about AI include:
“We don’t know how to control super-intelligent machines.”
Stephen Hawking warned that, in the short run, artificial intelligence and its effects depend on who is controlling it. In the long run, its independence could create a scenario in which we undo the mistakes of our past – natural, political, and economic. But in the hands of those with hidden agendas, it could pose a deep threat to mankind.
“Unless we learn how to prepare for, and avoid the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”
Hawking warned that it is vital for humans to prepare their minds and emotions for the (negative) possibilities AI may bring – once machines learn to sustain themselves, what is left for the human race to offer the world?
Other Leaders’ Views
Hawking is not alone in his perceptions. Other leaders in science and technology warn of the dangers of developing artificial intelligence blindly, without caution. Bill Gates, one of the world’s most renowned minds, has warned that humans need to keep developing their own minds to solve problems, rather than always turning to AI for help. In the long run, the alternative is a scenario where we look to the machines we built for direction instead of searching deep within ourselves – which is the essence of humanity.
Elon Musk is on the record stating that “There certainly will be job disruption. Because what’s going to happen is robots will be able to do everything better than us. … I mean all of us,” said Musk, speaking to the National Governors Association in July. “Yeah, I am not sure exactly what to do about this. This is really the scariest problem to me, I will tell you.”
For what it’s worth, advocates for AI, such as Mark Zuckerberg, have mostly positive things to say – but it is important to note that he recently lost about $3.5 billion of his net worth in a scandal in which people’s Facebook information was used without consent to analyse behaviour for third-party gain. That alone may well show where the dangers of AI lie when it falls into the wrong hands. Still, we cannot deny the benefits we have enjoyed through the use of AI.
Social media has made it possible to keep up with the world’s news, events and information. The development of online schools has been a step in the right direction for education – and on top of that, you may be reading this on your mobile phone, iPad, laptop or PC. Imagine being able to read it on TV through a blog channel dedicated to these very topics… imagine. But without caution about our ideas, judging them objectively as we go along, we may bring ourselves more terror than tranquillity. If the world’s most brilliant minds are all warning us about the possible dangers of this kind of intelligence, would we be smart to ignore them?
[Dedicated to Stephen Hawking: A great mind, a great soul, an unforgettable voice!]