AI still does not understand negation: a dangerous flaw revealed by MIT


11:00 ▪ 3 min read ▪ Fenelon L.

Research by the Massachusetts Institute of Technology exposes a major flaw in artificial intelligence (AI): its inability to properly understand negation. This gap could have dramatic consequences in critical sectors such as healthcare.

Illustration: a researcher in a lab coat, leaning forward in concentration, points at a computer screen where an AI has misinterpreted the word "NO".

In short

  • Modern AI systematically fails to understand the words "no" and "not".
  • This failure poses major risks in the medical and legal fields.
  • The problem lies in a training method based on association rather than logical reasoning.

AI still does not understand the words "no" and "not"

The research team led by Kumail Alhamoud, a doctoral student at MIT, conducted this study in collaboration with OpenAI and the University of Oxford.

Their work reveals a worrying flaw: the most advanced AI systems systematically fail when faced with negation. Well-known models such as ChatGPT, Gemini and Llama consistently favor positive associations and ignore negation terms, even explicit ones.

The medical sector illustrates this problem perfectly. When a radiologist writes a report mentioning "no fracture" or "not enlarged", the AI can misinterpret this essential information.

This confusion could cause diagnostic errors with potentially fatal consequences for patients.

The situation worsens with vision-language models, the hybrid systems that analyze images and text together. These technologies show an even more pronounced bias toward positive terms.

They often fail to distinguish positive descriptions from negative ones, multiplying the risk of errors in AI-assisted medical imaging.
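To see what this bias looks like in practice, here is a minimal sketch, in Python, of how one might probe a CLIP-style vision-language model with an affirmative caption and its negated counterpart. This is not the MIT team's protocol; the model checkpoint, the image file and the captions are illustrative assumptions, and near-identical scores for the two captions would suggest the negation is being ignored.

# A minimal sketch (not the study's protocol) of probing a CLIP-style model
# for negation bias: compare how strongly an image matches an affirmative
# caption versus its negated counterpart.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("xray.png")  # hypothetical local image file
captions = ["an X-ray showing a fracture", "an X-ray showing no fracture"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-to-caption similarity scores.
# If both captions score about the same, the word "no" is effectively ignored.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.3f}  {caption}")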

A problem of training, not of data

Franklin Delehelle, a research engineer at Lagrange Labs, explains that the heart of the problem does not lie in a lack of data. Current models are excellent at reproducing answers similar to their training data, but struggle to generate genuinely new answers.

Kian Katanforoosh, a professor at Stanford, points out that language models work by association, not logical reasoning. When they encounter "not good", they automatically associate "good" with a positive sentiment and ignore the negation.
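The association effect Katanforoosh describes can be illustrated with a deliberately simplified toy scorer written in Python. This is not how ChatGPT, Gemini or Llama actually work, and the word weights are illustrative assumptions: each word carries a fixed score, so "not good" comes out positive unless negation is explicitly handled.

# Toy illustration of association-based scoring versus a negation-aware variant.
ASSOCIATIONS = {"good": 1.0, "great": 1.0, "bad": -1.0}  # illustrative weights
NEGATORS = {"not", "no", "never"}

def associative_score(text: str) -> float:
    """Sum word weights while ignoring negation (the failure mode described above)."""
    return sum(ASSOCIATIONS.get(w, 0.0) for w in text.lower().split())

def negation_aware_score(text: str) -> float:
    """Flip the sign of a word's weight when it follows a negator."""
    score, negate = 0.0, False
    for w in text.lower().split():
        if w in NEGATORS:
            negate = True
            continue
        weight = ASSOCIATIONS.get(w, 0.0)
        score += -weight if negate else weight
        negate = False
    return score

print(associative_score("not good"))     # 1.0: "good" dominates, negation ignored
print(negation_aware_score("not good"))  # -1.0: negation applied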

This approach creates subtle but critical errors, particularly dangerous in legal, medical or human-resources contexts. Unlike humans, AI cannot step back from these automatic associations.

Researchers are exploring promising approaches based on synthetic negation data. However, Katanforoosh emphasizes that simply increasing the amount of training data is not enough.
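As a rough illustration of the synthetic-negation idea, the sketch below derives negated counterparts from affirmative captions so that explicit negation appears in the training set. The rewrite rule and the example captions are illustrative assumptions, not the data or method used in the study.

# Minimal sketch: generate synthetic negated captions for data augmentation.
import re

def negate_caption(caption: str) -> str:
    """Naively rewrite the first 'with a/an X' as 'with no X' (illustrative only)."""
    return re.sub(r"\bwith an?\b", "with no", caption, count=1)

affirmative = [
    "a chest X-ray with a fracture",
    "a scan with an enlarged heart",
]

# Pair each affirmative caption (label 1) with its synthetic negated version (label 0).
synthetic_pairs = [(c, 1) for c in affirmative] + [(negate_caption(c), 0) for c in affirmative]

for text, label in synthetic_pairs:
    print(label, text)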

The solution lies in developing models capable of logical reasoning, combining statistical learning with structured thinking. This development is the major challenge of modern artificial intelligence.

FENELON L.

Passionate about Bitcoin, I love exploring the twists and turns of blockchain and crypto and sharing my discoveries with the community. My dream is to live in a world where privacy and financial freedom are guaranteed for everyone, and I firmly believe that Bitcoin is a tool that can make this possible.

Disclaimer

The views and opinions expressed in this article are solely those of the author and should not be considered investment advice. Do your own research before making any investment decision.
