OpenAI has added retired U.S. Army General and former National Security Agency Director Paul Nakasone to its board of directors. It’s the latest move at the artificial intelligence firm, which has been dealing with continued reshuffling since CEO Sam Altman was temporarily ousted last fall, including a number of recent high-profile departures.
Nakasone will also join the OpenAI Board’s Safety and Security Committee, a new group that the company says is “responsible for making recommendations to the full Board on critical safety and security decisions for all OpenAI projects and operations.”
Here’s what to know about Nakasone:
Nakasone was a career Army officer
His interest in the digital age was sparked in the post-9/11 era, according to a 2020 profile in Wired. He served in command and staff positions at all levels of the Army, including assignments with cyber units in the U.S., Korea, Iraq, and Afghanistan.
He was a Trump appointee
Former President Donald Trump in 2018 tapped Nakasone to lead the National Security Agency and U.S. Cyber Command. He came into the role as morale at the NSA was reportedly suffering amid a series of leaks regarding the agency’s secret hacking tools.
Much of Nakasone’s time spent leading Cyber Command involved countering foreign efforts to meddle in American elections. Nakasone created a so-called Russia Small Group, consisting of experts within Cyber Command and the NSA, to home in on Russia’s attempts to interfere in elections.
He ended up as the longest-serving leader of U.S. Cyber Command. Gen. Timothy Haugh succeeded him in February.
He’s well-respected in D.C.
Nakasone has long been widely respected throughout the cybersecurity and military communities. “There’s nobody in the security community, broadly, that’s more respected,” Sen. Mark Warner (D-Va.) told Axios.
That Washington experience will likely prove tremendously beneficial to OpenAI as the company works to build public trust in its ability to safely pursue its goal of superintelligence.
Nakasone also arrives at a time when OpenAI is under heightened scrutiny over its artificial intelligence systems and the safeguards in place. That concern was amplified recently, after a handful of current and former employees signed a public letter warning that the technology poses risks to humanity. “AI companies have strong financial incentives to avoid effective oversight,” the letter reads, “and we do not believe bespoke structures of corporate governance are sufficient to change this.”
Cofounder Ilya Sutskever, who helped lead a safety team that worked to ensure artificial general intelligence didn’t turn on humans, left the company in May. Jan Leike, the team’s other leader, also quit, sharing a lengthy thread on X criticizing the company and its leadership.
“Artificial Intelligence has the potential to have huge positive impacts on people’s lives, but it can only meet this potential if these innovations are securely built and deployed,” OpenAI Board Chair Bret Taylor said in a statement. “General Nakasone’s unparalleled experience in areas like cybersecurity will help guide OpenAI in achieving its mission of ensuring artificial general intelligence benefits all of humanity.”