What is Artificial Intelligence (AI)?
Artificial intelligence (AI) refers to a broad branch of computer science and engineering concerned with developing intelligent machines. AI machines are capable of simulating human intelligence: they run intelligent computer programs that enable them to think like humans and emulate human actions. AI technology revolves around three core cognitive skills: learning, reasoning, and self-correction.
From a technological perspective, contemporary artificial intelligence combines several modern technologies, including machine learning, deep learning, speech recognition, text analytics, natural language processing (NLP), pattern recognition, computer vision, image processing, robotics, neural networks, and virtual agents.
Brief History of Artificial Intelligence
The phrase artificial intelligence is not entirely new to technology researchers. The idea of intelligent machines can, in fact, be traced back to ancient Greek and Egyptian myths.
The journey of modern AI technology has its roots in the efforts of classical philosophers, logicians, and mathematicians who described the human thought process as the manipulation of symbols. This line of thinking contributed to the invention of early electronic digital computers in the 1940s, such as the Atanasoff–Berry Computer (ABC). These inventions encouraged scientists to pursue the idea of an “electronic brain,” an artificial intellectual being. Then in 1943, Warren McCulloch and Walter Pitts proposed a model of artificial neurons that is today regarded as the very first work of artificial intelligence (AI).
By the 1950s, a generation of philosophers, mathematicians, and scientists had emerged who had meaningfully internalized the concept of artificially intelligent machines. Alan Turing, a British polymath and cryptanalyst, was one of those academics. Turing argued that humans use available information and apply reasoning to make decisions and solve problems; why, then, couldn’t machines be employed to reach the same goal? From this perspective, he published a paper in 1950 titled “Computing Machinery and Intelligence,” in which he deliberated on how to build an intelligent machine and proposed a procedure to test its intelligence. That procedure later became known as the “Turing Test.”
Artificial intelligence as an academic discipline was officially founded in 1956, when John McCarthy, inventor of the LISP programming language, coined the phrase “Artificial Intelligence” at the “Dartmouth Summer Research Project on Artificial Intelligence,” a workshop he organized jointly with Marvin Minsky.
Subsequently, in the 1960s and 1970s, researchers and scientists began employing computers in AI research, leading to the emergence of AI subfields such as machine learning, deep learning, predictive analytics, and data science. Since then, research and development in artificial intelligence have been advancing rapidly along many fronts.

Representative image: Theory of mind AI
Basic Components of an AI System
Knowledge representation- Knowledge representation in artificial intelligence pertains to supplying real-world information to machines in a form they can interpret and use to make intelligent decisions. Knowledge representation also enables AI agents to learn from stored data and perform tasks as intelligent beings: an AI agent can then debate humans, deliberate on complex issues, write poetry, understand emotions, move autonomously, converse with humans, and so on. Several categories of knowledge need to be represented in an AI system, for example objects, events, performance, facts, meta-knowledge, and knowledge bases.
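One simple way to picture knowledge representation is as a store of subject–predicate–object facts that a program can query. The sketch below is a minimal, hypothetical illustration (the facts and function names are invented for this example, not part of any real library):

```python
# A tiny knowledge base of (subject, predicate, object) facts.
facts = {
    ("penguin", "is_a", "bird"),
    ("bird", "has", "wings"),
    ("penguin", "cannot", "fly"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all stored triples matching the given (possibly partial) pattern."""
    return [
        (s, p, o)
        for (s, p, o) in facts
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Ask: what do we know about penguins?
print(query(subject="penguin"))
```

Real knowledge-representation systems (semantic networks, ontologies, knowledge graphs) follow the same basic idea at much larger scale.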
Automated Reasoning- Reasoning refers to drawing an inference. Automated reasoning is the area of computer science concerned with developing computing systems that automate the reasoning process by applying logic. In an automated reasoning system, an algorithmic description of a logical or mathematical problem is loaded into a computer, which executes the algorithm to derive theorems and verify equations.
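A classic, minimal form of automated reasoning is forward chaining: the system repeatedly applies if–then rules to known facts until no new conclusions can be drawn. The facts and rules below are illustrative assumptions, not taken from any real system:

```python
# Forward-chaining inference over simple if-then rules.
facts = {"rainy", "outdoors"}
rules = [
    ({"rainy", "outdoors"}, "wet"),   # if rainy and outdoors, then wet
    ({"wet"}, "cold"),                # if wet, then cold
]

def forward_chain(facts, rules):
    """Apply rules to the fact set until it stops growing."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))  # includes the derived facts "wet" and "cold"
```

Production rule engines and theorem provers use far more sophisticated logic, but the loop above captures the core mechanism of applying logic automatically.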
Self-Learning- The core of a self-learning system is its ability to train itself on unstructured data. At the machine level, it analyzes a body of data, recognizes patterns, draws inferences, makes and implements decisions, and gradually enriches its intelligence over time. A self-learning system is software that employs technologies such as machine learning, text interpretation, pattern recognition, speech recognition, evolutionary computation, and image, audio, and video processing.
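The learn-from-error loop at the heart of self-learning can be sketched with a perceptron, one of the simplest machine-learning models. Here it learns the logical AND function from labelled examples; this is a pure-Python teaching sketch (real systems use libraries such as scikit-learn), and the learning rate and epoch count are arbitrary choices:

```python
# A perceptron that learns logical AND from labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # training epochs
    for x, target in data:
        error = target - predict(x)      # the self-correction signal
        w[0] += lr * error * x[0]        # nudge weights toward the target
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1]
```

The key point is that the program was never told the AND rule; it inferred it by repeatedly comparing its own predictions against the data and correcting itself.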
Natural Language Understanding (NLU) - Technology research and consulting firm Gartner, Inc. defines natural language understanding (NLU) as the ability of computers to comprehend human language well enough to interact directly with humans as an intelligent being would. In other words, NLU uses computer programs to interpret unstructured data (e.g., text, speech) without requiring the formal syntax of programming languages. NLU is thus a subfield of natural language processing (NLP) within AI: it understands commands in human languages (e.g., Arabic, Bengali, English, Chinese), converts them into a computer-processable form, produces the corresponding output, and then responds in human language.
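The pipeline described above (free-form human input mapped to a structured, machine-usable intent) can be caricatured with keyword matching. This is only a toy sketch under invented assumptions; the intent names and keyword lists are hypothetical, and production NLU relies on trained language models rather than keyword overlap:

```python
# A toy intent recogniser: map an English utterance to a structured intent.
INTENT_KEYWORDS = {
    "weather_query": {"weather", "rain", "sunny", "forecast"},
    "greeting": {"hello", "hi", "hey"},
    "time_query": {"time", "clock", "hour"},
}

def understand(utterance):
    """Return the intent whose keywords best overlap the utterance, or 'unknown'."""
    words = set(utterance.lower().replace("?", " ").replace("!", " ").split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(understand("Will it rain tomorrow?"))  # -> weather_query
print(understand("Hello there!"))            # -> greeting
```

A real NLU component would then produce output and respond in human language; this sketch covers only the first step, turning unstructured text into a structured representation.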