Philosophy is crucial in the age of AI

mapman/Shutterstock

New scientific insights and engineering techniques have always impressed and terrified. They will undoubtedly continue to do so. OpenAI recently announced that it expects “superintelligence” – AI that surpasses human capabilities – to emerge this decade. It is therefore assembling a new team and devoting 20% of its computing resources to ensuring that the behavior of such AI systems will be in line with human values.

It seems they don’t want artificial superintelligences to wage war on humanity, as in James Cameron’s 1984 sci-fi thriller The Terminator (ominously, Arnold Schwarzenegger’s Terminator is sent back in time from 2029). OpenAI is enlisting top machine learning researchers and engineers to help them tackle the problem.

But can philosophers contribute anything? And more generally, what can we expect from the age-old discipline in the new technologically advanced era that is now emerging?

To answer this, it’s worth emphasizing that philosophy has played an important role in AI since its inception. One of the first AI success stories was a 1956 computer program called the Logic Theorist, created by Allen Newell and Herbert Simon. Its task was to prove theorems using the basic axioms of Principia Mathematica, a three-volume work from 1910 by the philosophers Alfred North Whitehead and Bertrand Russell that aimed to reconstruct all of mathematics on a single logical foundation.

The early emphasis on logic in AI owed much to the foundational debates pursued by mathematicians and philosophers.

An important step was the development of modern logic by the German philosopher Gottlob Frege in the late 19th century. Frege introduced into logic the use of quantifiable variables, rather than objects such as people. His approach made it possible not only to say, for example, “Joe Biden is president,” but also to systematically express general thoughts such as “there exists an X such that X is president,” where “there exists” is a quantifier and “X” is a variable.
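In modern notation (an anachronism: Frege’s own two-dimensional “Begriffsschrift” notation looked quite different), the contrast between the two kinds of claim can be written out as follows:

    President(b)          a claim about one named object, b (Joe Biden)
    ∃x President(x)       a quantified claim: some x or other is president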

Other important contributions in the 1930s came from the Austrian-born logician Kurt Gödel, whose theorems on completeness and incompleteness concern the limits of what can be proven, and from the Polish logician Alfred Tarski, whose proof of the “indefinability of truth” showed that “truth” in a standard formal system cannot be defined within that system itself, so that arithmetic truth, for example, cannot be defined within the system of arithmetic.

Finally, in 1936, the British pioneer Alan Turing built on these developments with his abstract conception of a computing machine, an idea that had a huge impact on early AI.

However, one could argue that even if such good old-fashioned symbolic AI was indebted to high philosophy and logic, “second-wave” AI, based on deep learning, stems more from the concrete technical feats involved in processing massive amounts of data.

Yet philosophy has played a role here, too. Consider large language models, such as those used by ChatGPT, which produce conversational text. They are enormous models, with billions or even trillions of parameters, trained on enormous datasets (typically covering much of the internet). But at their core, they track—and exploit—statistical patterns of language use. Something very similar to this idea was articulated by the Austrian philosopher Ludwig Wittgenstein in the mid-20th century: “the meaning of a word,” he said, “is its use in language.”
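To make that idea concrete, here is a minimal sketch in Python of a bigram model, a toy ancestor of today’s language models. Everything in it is invented for illustration; real LLMs learn such statistics with neural networks rather than a lookup table, but the underlying bet is the same: predict words from the statistical patterns of their use.

    import random
    from collections import Counter, defaultdict

    # A toy corpus standing in for "much of the internet".
    corpus = ("the meaning of a word is its use in language "
              "the use of a word gives the word its meaning").split()

    # Record how often each word follows each other word (bigram statistics).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(prev):
        """Sample a continuation in proportion to how often it followed `prev`."""
        words, weights = zip(*following[prev].items())
        return random.choices(words, weights=weights)[0]

    # Generate text: "meaning" recovered purely from patterns of use.
    words = ["the"]
    for _ in range(8):
        words.append(next_word(words[-1]))
    print(" ".join(words))

Notice that the program stores no definitions at all, only patterns of co-occurrence. Scaled up by many orders of magnitude, that is essentially the Wittgensteinian wager at the heart of an LLM.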

But contemporary philosophy, and not just its history, is relevant to AI and its development. Could an LLM actually understand the language it processes? Could it achieve consciousness? These are deeply philosophical questions.

Science has so far failed to fully explain how consciousness arises from the cells in the human brain. Some philosophers even believe that this is such a “hard problem” that it is beyond the reach of science and may need a helping hand from philosophy.

In a similar vein, we might ask whether an image-generating AI can be truly creative. Margaret Boden, a British cognitive scientist and philosopher of AI, argues that while AI can generate new ideas, it will struggle to evaluate them in the way that creative humans do.

She also expects that only a hybrid (neural-symbolic) architecture – one that uses both logic techniques and deep learning from data – will achieve artificial general intelligence.

Human values

Returning to OpenAI’s announcement: when we asked ChatGPT about the role of philosophy in the age of AI, it suggested to us that (among other things) philosophy “helps ensure that the development and use of AI are aligned with human values.”

In this spirit, we might suggest that if AI alignment is the serious problem OpenAI thinks it is, then it is not merely a technical problem to be solved by engineers or tech companies, but also a social one. That will require input from philosophers, but also from social scientists, lawyers, policymakers, citizen users and others.

Apple Park, the headquarters of Apple Inc. in Silicon Valley.

Indeed, many people are concerned about the growing power and influence of tech companies and their impact on democracy. Some argue that we need a whole new way of thinking about AI – one that takes into account the underlying systems that support the industry. British lawyer and author Jamie Susskind, for example, has argued that it’s time to build a “digital republic” – one that ultimately rejects the political and economic system that has given tech companies so much influence.

Finally, let’s briefly ask how AI will affect philosophy. Formal logic in philosophy actually dates back to the work of Aristotle in antiquity. In the 17th century, the German philosopher Gottfried Leibniz suggested that we might one day have a “calculus ratiocinator” — a calculating machine that would help us find answers to philosophical and scientific questions in a quasi-oracular way.

Perhaps we are now beginning to realize that vision, with some authors advocating a “computational philosophy” that literally encodes assumptions and derives consequences from them, ultimately enabling factual and/or value-based assessments of the outcomes.

For example, the PolyGraphs project simulates the effects of information sharing on social media. This can then be used to computationally answer questions about how we should form our opinions.
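To give a flavor of the approach, here is a hypothetical sketch in the spirit of such projects (it is not PolyGraphs’ actual code, and every parameter below is an assumption chosen for illustration): agents on a small network receive noisy evidence about a yes/no question, update their credences, and share them with their neighbors.

    import random

    random.seed(0)
    N_AGENTS, N_ROUNDS = 20, 50
    ACCURACY = 0.6         # chance a private signal matches the truth (assumed)
    TRUE_STATE = True      # the fact the agents are trying to learn

    # Agents start undecided, arranged on a ring with two neighbors each.
    credence = [0.5] * N_AGENTS
    neighbors = [((i - 1) % N_AGENTS, (i + 1) % N_AGENTS) for i in range(N_AGENTS)]

    def bayes_update(p, signal):
        """Update credence in TRUE_STATE after a noisy yes/no signal."""
        like_true = ACCURACY if signal else 1 - ACCURACY
        return p * like_true / (p * like_true + (1 - p) * (1 - like_true))

    for _ in range(N_ROUNDS):
        # 1. Evidence: each agent privately observes a noisy signal.
        for i in range(N_AGENTS):
            signal = TRUE_STATE if random.random() < ACCURACY else not TRUE_STATE
            credence[i] = bayes_update(credence[i], signal)
        # 2. Sharing: each agent averages its credence with its neighbors
        #    (simple DeGroot-style mixing, not full Bayesian communication).
        credence = [(credence[i] + credence[a] + credence[b]) / 3
                    for i, (a, b) in enumerate(neighbors)]

    believers = sum(c > 0.5 for c in credence)
    print(f"{believers}/{N_AGENTS} agents end up believing the truth.")

Varying the signal accuracy, the network shape or the mixing rule then lets one ask, computationally, under what conditions sharing information helps or hinders a community’s pursuit of the truth.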

Advances in AI have certainly given philosophers food for thought. Perhaps they are already beginning to provide some answers.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Brian Ball receives funding from the British Academy and has previously been supported by the Royal Society, the Royal Academy of Engineering and the Leverhulme Trust.

Anthony Grayling is not an employee of, an adviser to, an owner of shares in, or a recipient of funding from, any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
