‘This will be a stressful job’: Sam Altman offers $555k salary to fill most daunting role in AI


The maker of ChatGPT has advertised a $555,000-a-year vacancy with a job description daunting enough to make even Superman draw a sharp breath.

In what may be close to an impossible job, the “head of preparedness” at OpenAI will be directly responsible for defending against the risks posed by ever more powerful AIs, from harm to human mental health to cyber-attacks and biological weapons.

That is before the successful candidate has to start worrying about the possibility that AIs may soon begin training themselves, amid fears from some experts that they could “turn against us”.

“This will be a stressful job, and you’ll jump into the deep end pretty much immediately,” said Sam Altman, the chief executive of the San Francisco-based organisation, as he launched the hunt to fill “a critical role” to “help the world”.

The successful candidate will be responsible for evaluating and mitigating emerging threats and “tracking and preparing for frontier capabilities that create new risks of severe harm”. Previous holders of the post have lasted only short periods.

The opening comes against a backdrop of warnings from inside the AI industry about the risks of the increasingly capable technology. On Monday, Mustafa Suleyman, the chief executive of Microsoft AI, told BBC Radio 4’s Today programme: “I honestly think that if you’re not a little bit afraid at this moment, then you’re not paying attention.”

Demis Hassabis, the Nobel prize-winning co-founder of Google DeepMind, this month warned of risks that included AIs going “off the rails in some way that harms humanity”.

Amid resistance from Donald Trump’s White House, there is little regulation of AI at national or international level. Yoshua Bengio, a computer scientist known as one of the “godfathers of AI”, said recently: “A sandwich has more regulation than AI.” The result is that AI companies are largely regulating themselves.

Altman said on X as he launched the job search: “We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits. These questions are hard and there is little precedent.”

One user responded sardonically: “Sounds pretty chill, is there vacation included?”

What is included is an unspecified slice of equity in OpenAI, a company that has been valued at $500bn.

Last month, the rival company Anthropic reported the first AI-enabled cyber-attacks in which artificial intelligence acted largely autonomously under the supervision of suspected Chinese state actors to successfully hack and access targets’ internal data. This month, OpenAI said its latest model was almost three times better at hacking than three months earlier and said “we expect that upcoming AI models will continue on this trajectory”.

OpenAI is also defending a lawsuit from the family of Adam Raine, a 16-year-old from California who killed himself after alleged encouragement from ChatGPT. It has argued that Raine misused the technology. Another case, filed this month, claims ChatGPT encouraged the paranoid delusions of a 56-year-old in Connecticut, Stein-Erik Soelberg, who then murdered his 83-year-old mother and killed himself.

An OpenAI spokesperson said the company was reviewing the filings in the Soelberg case, which it described as “incredibly heartbreaking”, and that it was improving ChatGPT’s training “to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support”.
