As Silicon Valley firms and tech giants around the world continue to pour massive amounts of cash into the burgeoning AI field that is paving the way for developments like machine learning and chatbots, the industry is booming more than ever before.
As the latest ticket in town, and the technology powering online delivery apps, on-demand hailing services, and virtual reality, the field of AI has grown by leaps and bounds since the ‘80s.
But things weren’t always this way. Artificial intelligence went from being the stuff of science fiction and an academic curiosity to an unobtrusive day-to-day companion. Ideas about intelligent machines stretch back centuries, but it wasn’t until the 1980s that people really started paying attention.
AltaVista, one of the first search engines to index the full text of the web, launched back in 1995. It was not until the late 2000s, however, that Google gradually rolled out personalized search results. What was originally just chatter among hopeful and enthusiastic computer engineers has today become an accepted and inescapable component of modern life.
But how did all this come to be? Here is a look at artificial intelligence in the ‘80s.
Throw it back to the AI winter
According to this article from Digital Authority, by the 1970s the world had already been introduced to out-of-this-world AI possibilities such as Shakey the Robot, the first general-purpose mobile robot capable of making decisions based on its surroundings.
Impressive as Shakey was, the robot was painfully slow and beset by challenges. By 1973, despite the millions that had been spent, nothing comparable had followed.
As a result, the AI field was facing mounting pressure and criticism from government funders on both sides of the Atlantic, including the US Congress. In 1973, the well-known mathematician Professor Sir James Lighthill delivered a report on the state of AI research to the British Science Research Council that seemed to signal the death of the industry.
According to Lighthill, machines would only ever be able to play chess at the level of an experienced human amateur. He also argued that tasks like facial recognition would never come to be because they were beyond the capability of machines. In the wake of Lighthill’s report, AI funding was cut considerably, ushering in what became known in AI circles as the AI winter.
Things take a turn in the ‘80s
It wasn’t until the ‘80s that investors became interested in the field of AI again, once the technology’s commercial value was finally demonstrated. The first major commercial application of AI in the ‘80s was an expert system known as R1 (also called XCON).
R1 was used by Digital Equipment Corporation to configure orders for new computer systems. By 1986, R1 was so effective that it was saving the company a whopping $40 million each year.
The RB5X is created
In 1984, a robot known as the RB5X, capable of learning from experience, was created. The RB5X was a cylinder-shaped robot with a transparent, dome-shaped top and an optional arm. Much like the organs and nerves of a living creature, its components worked together to let the robot function as more than the sum of its parts.
Using self-learning software, the RB5X would progress from simple, random responses to eventually predicting future events in its environment based on analysis of its past experiences.
Virtual reality becomes a thing
The RB5X was a smashing success, and just a few years later the term virtual reality was coined alongside the first sales of VR glasses and gloves. The term came from Jaron Lanier in 1987, who was at the time the head of VPL Research, the company that pioneered virtual reality and 3D graphics research.
It was VPL Research that first sold virtual reality gear such as glasses and data gloves, and, much later, a full data suit. Around this same time, in 1989, Sir Tim Berners-Lee used hypertext and hyperlinks to invent the World Wide Web. The phrase augmented reality was not used until 1990, when it was coined by Tom Caudell.
Computerized automation begins
At the end of the ‘80s, computerized automation began taking hold, marking the start of a shift toward AI software programs rather than primarily robots. The ‘90s brought Deep Blue, the IBM chess computer famed for defeating world champion Garry Kasparov in 1997.
That same year, NASA deployed its first autonomous robotics system, known as Sojourner, on the surface of Mars. By this time, web crawlers and other AI-inspired data-extraction programs were already being used extensively on the World Wide Web.
Over the years, AI and robotic process automation have continued to become more refined and efficient. Today, automation software is a necessity rather than the luxury it was in the early days, and AI is being applied across numerous industries. And although Siri may not be perfect, where would we be without her?
The world’s obsession with AI is almost palpable. It seems as though machine learning startups crop up every week, and well-known social sites such as Facebook and Pinterest constantly announce newly rolled-out features that use AI to improve the user experience.
Today, we use machine learning for all sorts of tasks, including subjective decisions with life-changing consequences. Medical applications are talked about often, but they are among the least controversial uses of AI.
By the time the ‘90s rolled in, AI was already being put to more controversial uses, from controlling power grids to lethal military systems in war-torn regions. A lot has changed over the years, and the sheer scope of these applications will only expand as AI technology continues to become more commonplace.