Artificial Intelligence (AI)
Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
What is artificial intelligence?
While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in his 2004 paper (PDF, 106 KB) (link resides outside IBM): “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”
However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing’s seminal work, “Computing Machinery and Intelligence” (PDF, 89.8 KB) (link resides outside of IBM), published in 1950. In this paper, Turing, often referred to as the “father of computer science,” asks the following question: “Can machines think?” From there, he offers a test, now famously known as the “Turing Test,” in which a human interrogator tries to distinguish between a computer’s and a human’s text responses. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, since it draws on ideas about linguistics.
Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach (link resides outside IBM), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and thinking vs. acting:
Human approach:
- Systems that think like humans
- Systems that act like humans
Ideal approach:
- Systems that think rationally
- Systems that act rationally
Alan Turing’s definition would have fallen under the category of “systems that act like humans.”
In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines consist of AI algorithms that seek to create expert systems which make predictions or classifications based on input data, as in the sketch below.
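To make the idea of “predictions or classifications based on input data” concrete, here is a minimal sketch of a machine learning classifier using scikit-learn. The dataset (Iris), the model choice, and the train/test split are illustrative assumptions, not details from the article.

```python
# A minimal sketch: learn a classifier from labeled data, then use it to
# classify new inputs. Dataset and model choice are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small labeled dataset: flower measurements (inputs) and species (labels).
X, y = load_iris(return_X_y=True)

# Hold out part of the data to check how well the learned model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a simple model to the training data.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# Use the fitted model to classify unseen inputs and measure its accuracy.
predictions = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, predictions):.2f}")
```

The same pattern (fit a model on input data, then predict on new data) underlies the prediction and classification systems the article refers to, though production systems involve far more data preparation and evaluation.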
Today, a lot of hype still surrounds AI development, which is to be expected of any emerging technology in the market. As noted in Gartner’s hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow “a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation’s relevance and role in a market or domain.” As Lex Fridman notes here (01:08:15) (link resides outside IBM) in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.
As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment. To learn where IBM stands within the conversation around AI ethics, read more here.
Resource: https://www.ibm.com/cloud/learn/what-is-artificial-intelligence