OpenAI launches grant to fund external AI security research – THE Journal

OpenAI Launches Grant to Fund External AI Security Research

OpenAI is extending its safety efforts beyond its own walls with a Safety Fellowship program that funds external researchers to study AI risks. The OpenAI Safety Fellowship will run for six months, from September 2026 to February 2027, according to a press release, broadening the company's involvement in alignment and safety work. The initiative comes as AI companies face increasing scrutiny over how they manage the risks of rapidly evolving systems.

Open to Diverse Participants

The program is open to researchers, engineers, and practitioners outside the company. Participants receive grants, access to OpenAI models, and technical support to conduct research in areas such as robustness, privacy, agent monitoring, and abuse prevention. Fellows are expected to produce deliverables such as research papers, benchmarks, or datasets.

A Broader Trend in AI Research Funding

OpenAI said the grant is intended to "support high-impact research on the safety and alignment of advanced AI systems" and to increase the number of people working on technical safety challenges. The program reflects a broader trend among major AI developers of funding external research through grants, residencies, and academic partnerships.

For example, Anthropic, a competing AI company focused on safety, runs a similar grant program that supports independent researchers working on alignment, interpretability, and AI security. The program provides funding, mentorship, and computational resources, and participants typically produce publicly available research.

Google and its DeepMind unit operate a number of student research and fellowship programs that place participants on research teams for several months. These programs cover a wide range of AI topics, including safety-related work, although they are not always explicitly framed as safety programs.

Microsoft and Meta have also expanded funding for external AI research through academic partnerships, grants, and residency programs, often aimed at advancing work on responsible AI and system reliability.

Focus on Agent Oversight and Abuse Prevention

Together, these initiatives form a growing ecosystem of externally funded research connected to leading AI labs.

OpenAI said the priority areas of its grant include “agent oversight” and “high-severity abuse domains,” reflecting concerns about systems capable of performing multi-stage actions with limited human intervention. Recent advances in AI capabilities have enabled systems to perform more complex tasks, including coding, research support, and workflow automation. This has shifted some safety concerns from harmful outputs to the possibility of unintended or harmful actions by autonomous or semi-autonomous systems.

Growing Demand for AI Security Researchers

The growth of fellowship programs coincides with increasing demand for AI security researchers, a relatively small but expanding field. Companies offer competitive compensation and access to computing resources to attract talent as they compete to develop more advanced models. At the same time, governments and regulators are increasing pressure on AI developers to demonstrate that systems can be used safely and reliably.

Although external programs can expand participation in safety and security work, they do not replace internal decision-making processes at AI companies. Researchers participating in grant programs typically have no direct authority over product releases; their work is generally advisory, focused on identifying risks and proposing mitigation strategies. Responsibility for the use of AI systems remains with the companies that build and operate them.

Future Prospects and Expectations

OpenAI said the grant is part of a broader effort to support research and improve understanding of AI risks, but did not specify how findings from the program would inform product decisions.

The first cohort of the OpenAI Safety Fellowship is expected to be selected later this year. For more information, visit the OpenAI website.


About the author

John K. Waters is the Editor-in-Chief of a number of Converge360.com websites focused on high-end development, AI, and future technology. He has been writing about cutting-edge technologies and Silicon Valley culture for more than two decades and has written more than a dozen books. He also co-wrote the documentary “Silicon Valley: A 100 Year Renaissance,” which aired on PBS. He can be reached at [email protected].

