
Prologue to Artificial Intelligence

Origin of Artificial Intelligence

Before the invention of the computer, most experimental psychologists thought the brain was an unknowable black box. Its capacity to feel, form memories, and supply the reasoning behind action was considered unparalleled, and therefore beyond the reach of science.

So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades.


Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses.

They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn't a black box at all; it was more like a computer. The so-called cognitive revolution started small, and the seed of artificial intelligence was thus sown. But before diving into artificial intelligence, let us understand the meaning of intelligence.

What is Intelligence?

Intelligence has been defined in many different ways, including one's capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity, and problem solving. It can be more generally described as the ability to perceive information and retain it as knowledge to be applied towards adaptive behaviors within an environment.

Intelligence is most widely studied in humans, but has also been observed in non-human animals and in plants. Artificial intelligence is intelligence in machines (such as software).

Attributing human properties to objects and abstract ideas is one of the ways people have reasoned about their existence from the moment they acquired consciousness. AI derives from the same concept.

Roots of AI

Artificial intelligence has its roots in the concept of automation: AI began to emerge when humans tried to hand over mechanical tasks to machines. The outset of modern AI can be seen when philosophers started talking about 'thinking' as a symbolic system of reasoning.

Thomas Hobbes (sometimes called the grandfather of AI) came up, in the 17th century, with the concept of using symbols (numbers, graphs, calculations, statistics, etc.) as substitutes for longer, more complex expressions in solving problems. But the field's formal arrival is dated to 1956, at a conference at Dartmouth College in Hanover, New Hampshire, where the term "artificial intelligence" was coined.

Marvin Minsky (an MIT cognitive scientist) and others present at the conference viewed the prospect with extreme optimism. Minsky is quoted in Daniel Crevier's book AI: The Tumultuous History of the Search for Artificial Intelligence: "Within a generation [...] the problem of creating 'artificial intelligence' will substantially be solved."

But achieving an artificially intelligent being wasn't so simple. Amid many obstacles, Alan Turing provided a firm yardstick for measuring the intelligence of machines. He presented one of the most significant papers on machine intelligence, 'Computing Machinery and Intelligence', in 1950. His approach has stood up well to the test of time and remains universal.

The Turing Test

Turing did not provide definitions of machines and thinking; he avoided semantic arguments by inventing a game, the Turing imitation game. Instead of asking, 'Can machines think?', Turing said we should ask, 'Can machines pass a behaviour test for intelligence?'

He proposed a model of the aspects of thinking that can be computed, and to test whether this extends to the sphere of the human brain, he created the Turing test. The objective of the test was to determine whether a machine could convince a suspicious interrogator that it was in fact a human being.

The test seemed quite simple – no complex assignments (such as creating original art) were involved; to pass, the computer had to make small talk with a human being and show understanding of the given context.

Phenomenal Qualities of Turing Test:

  1. Unbiased approach towards intelligence – since the human and the machine interact via terminals, we get an objective view of intelligence, leaving little room to debate the human nature of intelligence.
  2. Two in one – it can be conducted either as the two-phase game just described, or as a single-phase game in which the interrogator must choose between the human and the machine from the beginning of the test. The interrogator is free to ask any question in any field and can concentrate solely on the content of the answers provided.
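The structure of the imitation game can be sketched in code. The following is a toy Python sketch, not anything from Turing's paper: the "machine" is a canned-answer stub, the "human" is scripted, and the interrogator is a simple heuristic – all three names and their behaviors are illustrative assumptions made up for this example.

```python
import random

# Toy sketch of the imitation game. The "machine" and "human" below are
# scripted placeholders, not real participants or a real AI.

def machine_respond(question: str) -> str:
    """Canned answers standing in for the machine under test."""
    canned = {
        "how are you?": "I'm doing well, thanks for asking.",
        "what is 2 + 2?": "4, of course.",
    }
    return canned.get(question.lower(), "That's an interesting question.")

def human_respond(question: str) -> str:
    """Scripted stand-in for the hidden human participant."""
    return "Hmm, let me think about: " + question.lower()

def naive_interrogator(transcript):
    """Guesses that the terminal whose answers never start with 'Hmm'
    is the machine; falls back to 'A' if both look human."""
    for label, exchanges in transcript.items():
        if all(not answer.startswith("Hmm") for _, answer in exchanges):
            return label
    return "A"

def imitation_game(questions, interrogator):
    """Randomly seat machine and human at terminals A and B, collect
    question/answer transcripts, and let the interrogator guess which
    terminal is the machine. Returns True if the machine fooled them."""
    responders = [machine_respond, human_respond]
    random.shuffle(responders)
    terminals = {"A": responders[0], "B": responders[1]}
    transcript = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in terminals.items()
    }
    guess = interrogator(transcript)  # "A" or "B"
    machine_label = next(l for l, f in terminals.items()
                         if f is machine_respond)
    return guess != machine_label  # fooled if the guess was wrong
```

Because the interrogator is free to probe the content of the answers, this toy machine is always unmasked: `imitation_game(["How are you?"], naive_interrogator)` returns False regardless of the random seating.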

But the rosy days soon vanished, and slow progress in AI became a problem. Several reports criticized the field's progress, government interest dropped off, and funding for such projects dried up.

Winters of AI

After the initial euphoria, reality struck the field hard. It soon became clear that solid results would take at least twice as long as anticipated. After the ALPAC and Lighthill reports, which documented unsatisfactory progress in AI projects (problems with natural-language software, slow advancement), the flow of investment was cut off.

The first AI winter began in 1974 and lasted into the 1980s; when Japan began investing heavily in the field, competitive spirit revived it and the British government resumed funding for AI. The second AI winter, triggered by the collapse of the specialized AI hardware market (as general-purpose computers overtook it) and a further decrease in funding, lasted five years (1987 to 1993).

Even in its lows, researchers kept the fire alive by continuing their work under different names, such as evolutionary programming, machine learning, speech recognition, data mining, industrial robotics, and search engines, all of which have since become subfields of AI.

The consistent efforts of researchers took the state of AI from dormant to progressive. Some notable achievements include:

  1. IBM's Deep Blue became the first computer to defeat a reigning world chess champion, Garry Kasparov, in a match in 1997.
  2. IBM’s question answering system Watson won the Jeopardy quiz against proficient opponents in 2011.
  3. Eugene Goostman, a chatbot, persuaded part of a Turing-test jury in 2014 that it was a 13-year-old boy from Ukraine. However, Eugene convinced only 33% of the judges, the bare minimum. Such a result is not considered a true pass of the Turing test, because it relied mostly on external conditions (a child from a non-English-speaking country can be forgiven deficiencies in small talk, while an adult native speaker would not be). The developers were subsequently expected to defend their victory and show that they had created sentient software (which they almost certainly had not).


Over the course of the last fifty years, artificial intelligence research has produced many capabilities that the general public does not recognize as AI. Most of our online activities involve some form of AI (virtual agents, pattern recognition, targeted advertising). However, all that has been achieved so far is a mere grain of sand compared with what lies ahead.

Consequently, experts predict at least fifty more years of trial and error before human intelligence can be emulated. The subject is simply too broad and complex to be resolved in a short period of time. However, the advances made during the quest so far have greatly influenced and shaped the world we live in.

You may also want to read: Artificial Intelligence as a Futuristic Technology