AI. (Shutterstock)

Researchers found that while AI can analyze and predict the reliability of information, it still lags behind humans in nuanced tasks like evaluating personal memories.

By Pesach Benson, TPS

Human memory plays a critical role in everyday decision-making.

Whether it’s a police officer gathering an eyewitness account, a doctor diagnosing a patient, or a friend recommending a restaurant, the memories of others inform our actions in significant ways.

Yet, memory is fallible — prone to forgetting and distortion.

With this challenge in mind, a team of Israeli researchers at Ben-Gurion University of the Negev explored how humans assess the reliability of memories shared by others and how these processes compare to machine learning models.

They found that while AI can analyze and predict the reliability of information, it still lags behind humans in nuanced tasks like evaluating personal memories.

The researchers also found machines are getting better at identifying patterns that signal accurate memories, which could lead to memory-augmenting tools.

Years of research have shown that memory is not an accurate record of past events. Instead, it is a reconstructive process, susceptible to errors even after short periods.

This raises the dilemma of how people can base their knowledge and decisions on something as unreliable as memory.

The researchers, led by Dr. Talya Sadeh of Ben-Gurion University’s Department of Cognitive and Brain Sciences, focused on understanding how humans recognize the accuracy of others’ memories and whether AI models, such as those used in natural language processing (NLP), could help identify the truth of shared memories.

The Ben-Gurion team’s findings were recently published in the peer-reviewed journal PNAS (Proceedings of the National Academy of Sciences).

“A significant portion of people’s knowledge is derived from sharing episodic memories with one another,” Sadeh explained.

“This knowledge influences decisions, opinions, and even beliefs. My research aimed to understand how we manage to rely on memories that aren’t always reliable—and whether language models like ChatGPT could help us evaluate the truthfulness of these memories.”

To investigate, Dr. Sadeh and her team designed an experiment simulating real-life situations where participants were asked to judge the accuracy of another person’s memory.

For instance, they might hear someone say, “I remember that a car didn’t stop at a red light because I saw its speed before it reached the intersection.”

Based on such descriptions, participants were asked to decide whether the memory was accurate.

In the next phase, participants assessed the quality of the memory by scoring how vivid, detailed, and confident the memory sounded.

By doing so, they could indirectly judge how reliable the memory was.

To compare human memory assessment with AI capabilities, the researchers used machine learning models trained on language patterns to identify words and phrases most indicative of a correct or incorrect memory.

Surprisingly, the results showed that humans and the AI model agreed on 14 of the 20 words most relevant for judging memory accuracy.
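The article does not describe the team's implementation, but a word-level classifier of this general kind is easy to sketch. The Python snippet below is a minimal, hypothetical illustration, assuming a TF-IDF bag-of-words representation and a logistic-regression model; the example texts, labels, and model choice are illustrative assumptions, not the study's actual pipeline. The final loop ranks the words that most strongly push a prediction toward "accurate" or "inaccurate," analogous to the study's list of most relevant words.

```python
# Minimal sketch of a text classifier for memory descriptions.
# Assumes labeled examples (1 = memory later verified as accurate,
# 0 = inaccurate); all data and model choices here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: short memory descriptions with accuracy labels.
descriptions = [
    "I clearly remember the red car running the light, I can still picture it",
    "I think maybe there was a car, it might have been red, I am not sure",
    # ... more labeled examples would go here ...
]
labels = [1, 0]

# Turn each description into word-frequency features.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(descriptions)

# Fit a linear classifier that predicts accuracy from word usage.
model = LogisticRegression()
model.fit(X, labels)

# Rank words by how strongly they push the prediction toward
# "accurate" or "inaccurate" -- analogous to the study's most relevant words.
words = vectorizer.get_feature_names_out()
weights = model.coef_[0]
top = sorted(zip(words, weights), key=lambda t: abs(t[1]), reverse=True)[:20]
for word, weight in top:
    print(f"{word:>12}  {weight:+.3f}")
```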

“Humans appear to use much of the same information as the machine learning model to judge whether a memory is true or false,” noted Sadeh.

“However, the study also revealed that humans have an edge. When participants evaluated the qualities of the memory, such as vividness and confidence, they were 10% more accurate in judging its reliability than when they simply relied on a direct judgment of truthfulness.”

While AI models extract statistical patterns from the words in memory descriptions, humans rely on their sensitivity to sensory experiences and emotions conveyed by the memory sharer.

This difference explains why humans are often better at evaluating the reliability of memories than machines, despite the growing capabilities of language models.

“Our ability to use language to convey thoughts, feelings, and beliefs—along with the lifelong experience of learning from others—allows humans to validate shared memories in ways that machines, which operate on statistical rules, cannot yet match,” Sadeh said.

Memory-augmenting tools could, for example, help police and court officials assess the reliability of a witness’s testimony, assist doctors who rely on patients to describe their symptoms, support students who share knowledge based on what they remember from lessons, and better inform therapy for people who struggle with memory problems or distorted recollections.

“Humans are social creatures, and much of what we know comes from shared experiences. We’ve shown that machines can’t yet replace human intuition and understanding when it comes to personal memories,” said Sadeh.
