News

Intel joins the MLCommons AI Safety Working Group


Intel is making strides in the field of artificial intelligence (AI) safety, recently becoming a founding member of the AI Safety (AIS) working group organized by MLCommons. This marks a significant step in Intel’s ongoing commitment to advancing AI technologies responsibly.

What is the MLCommons AI Safety Working Group?

The MLCommons AI Safety Working Group has a comprehensive mission to support the community in developing AI safety tests and to establish research and industry-standard benchmarks based on these tests. Their primary goal is to guide responsible development of AI systems, drawing inspiration from how computing performance benchmarks like MLPerf have helped to set concrete objectives and thereby accelerate progress. In a similar vein, the safety benchmarks developed by this working group aim to provide a clear definition of what constitutes a “safer” AI system, which could significantly speed up the development of such systems.

Another major purpose of the benchmarks is to aid consumers and corporate purchasers in making more informed decisions when selecting AI systems for specific use-cases. Given the complexity of AI technologies, these benchmarks offer a valuable resource for evaluating the safety and suitability of different systems.

Additionally, the benchmarks are designed to inform technically sound, risk-based policy regulations. This comes at a time when governments around the world are increasingly focusing on the safety of AI systems, spurred by public concern.

To accomplish these objectives, the working group has outlined four key deliverables.

  1. They curate a pool of safety tests and work on developing better testing methodologies.
  2. They define benchmarks for specific AI use-cases by summarizing test results in an easily understandable manner for non-experts.
  3. They are developing a community platform that will serve as a comprehensive resource for AI safety testing, from registering tests to viewing benchmark scores.
  4. They are defining a set of governance principles and policies through a multi-stakeholder process to ensure that decisions are made in a trustworthy manner.

The group holds weekly meetings to discuss these topics, and anyone interested in joining can sign up via their organizational email.


AIS working group

The AIS working group is a collective of AI experts from both industry and academia. As a founding member, Intel is set to contribute its vast expertise to the creation of a platform for benchmarks that measure the safety and risk factors associated with AI tools and models. This collaborative effort is geared towards developing standard AI safety benchmarks as testing matures, a crucial step in ensuring AI deployment and safety in society.

One of the key areas of focus for the AIS working group, and indeed for Intel, is the responsible training and deployment of large language models (LLMs). These powerful AI tools have the capacity to generate human-like text, making them invaluable across a range of applications from content creation to customer service. However, their potential misuse poses significant societal risks, making the development of safety benchmarks for LLMs a priority for the working group.

To aid in evaluating the risks associated with rapidly evolving AI technologies, the AIS working group is also developing a safety rating system. This system will provide a standardized measure of the safety of various AI tools and models, helping industry and academia alike to make informed decisions about their use and deployment.

“Intel is committed to advancing AI responsibly and making it accessible to everyone. We approach safety concerns holistically and develop innovations across hardware and software to enable the ecosystem to build trustworthy AI. Due to the ubiquity and pervasiveness of large language models, it is crucial to work across the ecosystem to address safety concerns in the development and deployment of AI. To this end, we’re pleased to join the industry in defining the new processes, methods and benchmarks to improve AI everywhere,” said Deepak Patil, Intel corporate vice president and general manager, Data Center AI Solutions.


Intel’s participation in the AIS working group aligns with its commitment to the responsible advancement of AI technologies. The company plans to share its AI safety findings, best practices, and responsible development processes such as red-teaming and safety tests with the group. This sharing of knowledge and expertise is expected to aid in the establishment of a common set of best practices and benchmarks for the safe development and deployment of AI tools.

The initial focus of the AIS working group is to develop safety benchmarks for LLMs. This effort will build on research from Stanford University’s Center for Research on Foundation Models and its Holistic Evaluation of Language Models (HELM). Intel will also share its internal review processes used to develop AI models and tools with the AIS working group. This collaboration is expected to contribute significantly to the establishment of a common set of best practices and benchmarks for the safe development and deployment of generative AI tools leveraging LLMs.

Intel’s involvement in the MLCommons AI Safety working group is a significant step toward ensuring the responsible development and deployment of AI technologies. The group’s collaborative efforts should contribute to robust safety benchmarks for AI tools and models, ultimately helping to mitigate the societal risks posed by these powerful technologies.

Source and Image Credit: Intel

Filed Under: Technology News
