Investing.com -- OpenAI, a leading research organization in artificial intelligence, has announced a series of enhancements to its security initiatives, including the expansion of its Cybersecurity Grant Program and the introduction of increased rewards in its Security Bug Bounty Program. This comes as part of OpenAI’s ongoing commitment to security excellence as it progresses towards the development of artificial general intelligence (AGI).
The Cybersecurity Grant Program, launched two years ago, has reviewed over a thousand applications and funded 28 research initiatives. The program aims to advance the fields of AI and cybersecurity, and has provided valuable insights into areas such as secure code generation, prompt injection, and autonomous cybersecurity defenses. OpenAI is now accepting proposals for a wider range of projects, focusing on areas like software patching, model privacy, detection and response, security integration, and agentic security. The organization is also offering microgrants, in the form of API credits, for high-quality proposals.
In addition to the grant program, OpenAI is engaging with researchers and practitioners in the cybersecurity community to share findings and stay current with emerging research. The organization partners with experts in academic, government, and commercial labs to train its models and to benchmark skills gaps across cybersecurity domains. This collaboration has produced strong results in areas such as code security.
OpenAI’s Security Bug Bounty Program rewards security researchers for identifying vulnerabilities and threats within the organization’s infrastructure and products. OpenAI has significantly increased the maximum bounty payout from $20,000 to $100,000, reflecting the organization’s commitment to rewarding meaningful, high-impact security research. To celebrate the expansion of the bug bounty program, OpenAI is also launching limited-time promotions, offering additional bounty bonuses for qualifying reports within specific categories.
As OpenAI moves closer to AGI, the organization is proactively adapting to evolving security threats by building comprehensive security measures directly into its infrastructure and models. This includes leveraging its own AI technology to scale its cyber defenses, developing advanced methods to detect and rapidly respond to cyber threats, and partnering with the security research firm SpecterOps to rigorously test its security defenses.
OpenAI is also investing in understanding and mitigating the unique security and resilience challenges that arise with the development of advanced AI agents. This includes developing robust alignment methods to defend against prompt injection attacks and implementing agent monitoring controls to quickly detect and mitigate unintended or harmful behaviors.
Security is a cornerstone of OpenAI’s next-generation AI projects, such as Stargate. The organization works with its partners to adopt industry-leading security practices, including zero-trust architectures and hardware-backed security solutions. OpenAI is also expanding its security program and is seeking passionate engineers in several areas.
OpenAI serves more than 400 million weekly active users across businesses, enterprises, and governments worldwide. As OpenAI’s models and products advance, the organization remains fully dedicated to a proactive, transparent approach to security, driven by rigorous testing, collaborative research, and the goal of ensuring the secure, responsible, and beneficial development of AGI.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.