OpenAI’s “deep research” gives a preview of the AI agents of the future

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

This week, OpenAI announced its new AI research assistant, which it calls “deep research.” Powered by OpenAI’s o3-mini model (which was trained to use trial and error to find answers to complex questions), deep research is one of OpenAI’s first attempts at a true “agent”: an AI that’s capable of following instructions and working on its own.

OpenAI says deep research is built for people in fields like finance, science, policy, and engineering who need thorough, precise, and reliable research. It can also be useful for big-ticket purchases, like houses or cars. Because the model spins through a lot of compute cycles and carries a lot of memory during its task, it consumes significant computing power on OpenAI’s servers. That’s why only the company’s $200-per-month Pro users have access to the tool, and they’re limited to 100 searches per month. OpenAI was kind enough to grant me access for a week to try it out. I found a new “deep research” button just below the prompting window in ChatGPT.

I first asked it to research all the nondrug products that claim to help people with low back pain. I was thinking about consumer tech gadgets, but I hadn’t specified that. So ChatGPT was unsure about the scope of my search (and, apparently, so was I), and it asked me if I wanted to include ergonomic furniture and posture correctors. The model researched the question for 6 minutes, cited 20 sources, and returned a 2,000-word essay on all the consumer back pain devices it could find on the internet. It discussed the relative merits of heated vibration belts, contact pad systems, and transcutaneous electrical nerve stimulation (TENS) units. It even generated a grid that displayed all the details and pricing of 10 different devices. Not knowing a great deal about such devices, I couldn’t find any gaps in the information, or any suspect statements.

I decided to try something a little harder. “I would like an executive overview of the current research into using artificial intelligence to find new cancer treatments or diagnostic tools,” I typed. “Please organize your answer so that the treatments that are most promising, and closest to being used on real patients, are given emphasis.”

Like DeepSeek’s R1 model and Google’s Gemini Advanced 2.0 Flash Thinking Experimental, OpenAI’s research tool shows you its “chain of thought” as it works toward a satisfying answer. As it searched, it telegraphed its process: “I’m working through AI’s integration in cancer diagnostics and treatment, covering imaging, pathology, genomics, and radiotherapy planning. Progressing towards a comprehensive understanding.” OpenAI also makes a nice UX choice by putting this chain-of-thought flow in a separate pane at the right of the screen, instead of presenting it right on top of the research results. The only problem is that you get just one chance to see it, because the pane goes away after the agent finishes its research.

I was surprised that OpenAI’s deep research tool took only 4 minutes to finish its work, and cited only 18 sources. It created a summary of how AI is being used in cancer research, citing specific studies that validated the AI in clinical settings. It discussed trends in using AI to read medical imaging, find cancer risk in genome data, assist surgery, discover drugs, and plan and dose radiation therapy. However, I noticed that many of the studies and FDA approvals it cited were more than 18 months old. Some of the statements in the report sounded outdated: “Notably, several AI-driven tools are nearing real-world clinical use—with some already approved—particularly in diagnostics (imaging and pathology),” it stated, but AI diagnostic tools are already in clinical use.

Before starting the research, I was aware of a new landmark study published two days ago in The Lancet medical journal about AI assisting doctors in reading mammograms (more on that below). The deep research report mentioned this same study, but it outlined preliminary results published in 2023, not the more recent results published this month.

I have full confidence in OpenAI’s deep research tool for doing product searches. I’m less confident, though, about scientific research, only because of the currency of the research it included in its report. It’s also possible that my search was overbroad, since AI is now being used on many fronts to fight cancer. And to be clear: Two searches certainly aren’t enough to pass judgment on deep research. The number and kinds of searches you can do are practically infinite, so I’ll be testing it more while I still have access. On the whole, I’m impressed with OpenAI’s new tool: at the very least, it gives you a framework and some sources and ideas to start you off on your own research.

AI is working alongside doctors on early breast cancer detection

A study of more than 100,000 breast images from mammography screenings in Sweden found that when an AI system assisted single doctors in reviewing mammograms, positive detections of cancer increased by 29%. The screenings were coordinated as part of the Swedish national screening program and performed at four screening sites in southwest Sweden. 

The AI system, called Transpara, was developed by ScreenPoint Medical in the Netherlands. Normally, two doctors review each mammogram together. When the AI stepped in for one of them, overall screen-reading time dropped by 44.2%, saving radiologists significant time. The AI makes no decisions; it merely points out potential problem spots in the image and assigns a risk score. The human doctor then decides how to proceed. With a nearly 30% improvement in early detections of cancer, the AI is quite literally saving lives. Healthcare providers have been using AI image-recognition systems in diagnostics since 2017, with some success, but the results of large-scale studies are only now beginning to appear.

Google touts the profitability of its AI search ads

Alphabet announced its quarterly results earlier this week, and hidden among the numbers was some good news about Google’s AI search results (called AI Overviews). Some observers feared that Google would struggle to find ad formats that brands like within the new AI results, or that ads around the AI results would cannibalize Google’s regular search-ads business. But Google may have found the right formats already, because the AI ads are selling well and are profitable, analysts say. “We were particularly impressed by the firm’s commentary on AI Overviews monetization, which is approximately at par with traditional search monetization despite its launch just a few months ago,” wrote Morningstar equity analyst Malik Ahmed Khan in a research brief.

Khan says Google’s AI investments paid off in the company’s revamped Shopping section within Google Search, which was upgraded last quarter with AI. The Shopping segment yielded 13% more daily active U.S. users in December 2024 compared with the same month a year earlier. Google also says that younger people who are attracted to AI Overviews end up using regular Google Search more, with their usage increasing over time. “This dynamic of AI Overviews being additive to Google Search stands at odds with the market narrative of generative AI being the death knell for traditional search,” Khan says.

Google also announced that it intends to spend $75 billion in capital expenditures during 2025, much of which will go toward new cloud capacity and AI infrastructure.

Published: February 6, 2025

