Artificial intelligence is an increasingly pervasive part of our daily lives. Whether you’ve used a self-service kiosk to check in before your flight, typed keywords into a search engine and received suggested results, or communicated with a digital assistant like Siri, you’ve interacted with AI. These systems use data, complex algorithms and computing power to process information, make predictions and decisions, and complete tasks that require human-like abilities.
The first examples of limited-memory AI were built using neural networks that “learned” through trial and error. This approach, which relies on repetition and pattern recognition to perform a task, eventually became the basis for most of today’s machine learning (ML) models.
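Trial-and-error learning of this kind can be illustrated with a single perceptron, one of the earliest neural-network models: the program repeatedly makes a prediction, compares it to the target, and nudges its weights whenever it is wrong. The sketch below is a minimal, illustrative example (learning the logical AND function), not a reconstruction of any specific historical system; the function names and the learning rate are assumptions for demonstration.

```python
# Minimal sketch of trial-and-error learning with a single perceptron.
# All names (train_perceptron, predict) and parameters (epochs, lr) are
# illustrative choices, not from the article.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Repeat over the data, adjusting weights whenever a prediction is wrong."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - pred          # 0 when correct; no update needed
            w[0] += lr * error * x1        # nudge weights toward the target
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The AND function: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After a few passes over the data, the weights settle on a rule that classifies every input correctly. This repetition-driven correction is the pattern-recognition loop that modern ML models elaborate on at vastly larger scale.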
In 1997, IBM’s Deep Blue beat world chess champion Garry Kasparov in a championship match, introducing the public to the concept of computer intelligence. More than two decades later, in 2021, the U.S. government established the National AI Initiative with an emphasis on loosening regulatory AI constraints and accelerating technology development. Data science also emerged as a popular discipline.
A growing number of academic and commercial organizations rely on AI for automation, optimization and productivity. But the emergence of advanced AI raises ethical concerns over accelerated job loss, the spread of misinformation and potentially existential risks that could arise if intelligent machines outpace human understanding and intelligence. The philosophical question of whether a machine can have a mind, consciousness and mental states remains unresolved.