Despite ever-growing interest in AI tools and assistants, it's worth remembering that they still have significant limitations. They are not as smart as they might seem on the surface. Case in point: ChatGPT is practically useless when it comes to playing chess.
As reported by Futurism, ChatGPT lost a chess game against the classic Atari 2600 gaming console. Robert Caruso, an engineer at Citrix, organised the match between the AI and a simple chess program released for the 1977 Atari 2600.
During the game, ChatGPT made a series of embarrassing mistakes, misread moves, and kept losing track of its own pieces. “ChatGPT got absolutely wrecked on the beginner level,” Caruso wrote. In the end, the AI chatbot simply gave up and lost.
The fiasco is an important reminder that LLMs, even the "reasoning" ones, are still just language prediction models at their core. It's clear evidence that dedicated tools trained or coded for a specific purpose, like the Atari 2600's chess program, can still outperform AI assistants that are supposed to do "everything." That should be an important lesson both for tech companies like OpenAI and for users who rely on AI tools.