Google, OpenAI, and others have just made a huge security pledge on AI

Representatives from seven of the world’s biggest names in artificial intelligence – OpenAI (developer of ChatGPT), Microsoft, Google, Amazon, Meta, Anthropic, and Inflection – agreed to develop a fingerprinting system that will reduce the risk of fraud and deception associated with AI-generated works.

The basic idea is to develop robust technical mechanisms that ensure users know when content is generated by AI, such as a watermarking system. It’s kind of like DNA fingerprinting, but for AI-generated works: a paragraph of text, a block of code, a visual illustration, or an audio clip.
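The companies have not published how their watermarking would actually work, so as a purely hypothetical illustration of the idea, here is a minimal sketch that embeds an invisible "fingerprint" into generated text using zero-width Unicode characters. The function names and the encoding scheme are assumptions for illustration, not anything the pledge describes:

```python
# Hypothetical sketch: encode a bit string as invisible zero-width
# characters appended to AI-generated text, then recover it later.
# Real watermarking schemes are far more robust than this toy example.

ZW0 = "\u200b"  # zero-width space      -> represents bit "0"
ZW1 = "\u200c"  # zero-width non-joiner -> represents bit "1"


def embed_watermark(text: str, mark: str) -> str:
    """Append each bit of `mark` (a string of '0'/'1') as an invisible character."""
    payload = "".join(ZW1 if bit == "1" else ZW0 for bit in mark)
    return text + payload


def extract_watermark(text: str) -> str:
    """Recover the bit string from any zero-width characters in the text."""
    return "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))


marked = embed_watermark("A paragraph produced by a model.", "1011")
print(marked)                    # displays identically to the original text
print(extract_watermark(marked)) # the hidden fingerprint: 1011
```

A scheme this simple is trivially stripped by copy-pasting through a plain-text filter, which is precisely why production watermarks tend to be woven into the model's output statistics rather than bolted on afterward.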

The watermark may not be visible, but it would be embedded at a technical level, which should make verification far more dependable. AI detection tools already exist, but they are prone to error, which can have devastating professional and academic consequences. While teachers and professors warn about students using AI tools to cheat, students worry that AI detectors will incorrectly flag their own work as plagiarism.

Even OpenAI warns that its classification tool for distinguishing text written by a human from text written by a machine is “not fully reliable.”

Right now, details of the AI watermarking system are scant, but it’s definitely a step in the right direction, especially at a time when AI tools pose real risks and have opened the door to a world of scams.

Positive and collective progress

The focus is on safety, transparency, security, and trust. The companies invited to the White House meeting agreed to have their AI tools tested internally and externally by experts before releasing them to the public. The AI labs have also pledged to keep confronting the threats posed by AI and to share their knowledge with industry peers, academic experts, and civil society.

Regarding security, the companies have pledged to put internal safeguards in place and to release their AI models only after rigorous testing. To reduce cybersecurity risks, they also agreed to let independent experts audit their products and to maintain an open channel for reporting vulnerabilities.

Another notable element of the commitment is that the AI labs will publicly report “the capabilities and limitations of their AI systems, as well as areas of appropriate and inappropriate use.” This point is important, as current-generation AI systems suffer from well-known problems of accuracy and bias in multiple forms.

Finally, the makers of AI tools have also agreed to dedicate effort and resources to developing AI in a way that benefits society rather than harming it, with a focus on using artificial intelligence to tackle problems such as the climate crisis and cancer research.

AI experts and industry players have already signed similar pledges to develop responsible AI, making this commitment another step forward for AI ethics.

