    Peggy's Tech Blog

    Creating Safety Benchmarks for AI

Updated: November 13, 2023

Generative AI (artificial intelligence) brings great benefits to society and industry. We are seeing the rise of the technology in aviation and airlines, power and utilities, and even dentistry. Data is the new oil, and companies that can capture and capitalize on it have a leg up in today’s competitive, labor-constrained market. Still, challenges remain.

Perhaps the biggest challenge is safety. Cybersecurity is a huge concern for many businesses as they leverage new, emerging technologies, but dig a bit deeper and there are other safety concerns to consider as well.

Misinformation and bias can be just as dangerous. Consider healthcare. Much of the data that exists in the healthcare industry today comes from those who could afford care in the past. Lower-income families and developing nations simply aren’t represented, which skews the sample.

And then there is the misinformation that generative AI itself can produce. As a journalist, I know how important fact checking is on any project, because misinformation is everywhere, and I mean everywhere. Last year, USC (University of Southern California) researchers found bias in up to 38.6% of the facts used by artificial intelligence. That is something we simply cannot ignore.

Many organizations recognize these and other concerns as they relate to safety and artificial intelligence, and some are taking steps to address them. Consider the example of MLCommons, an AI benchmarking organization. At the end of October, it announced the creation of the AI Safety (AIS) working group, which will develop a platform and a pool of tests from many contributors to support AI safety benchmarks for diverse use cases.

The AIS working group’s initial participation includes a multi-disciplinary group of AI experts: Anthropic, Coactive AI, Google, Inflection, Intel, Meta, Microsoft, NVIDIA, OpenAI, Qualcomm Technologies, Inc., and academics Joaquin Vanschoren of Eindhoven University of Technology, Percy Liang of Stanford University, and Bo Li of the University of Chicago. Participation in the working group is open to academic and industry researchers and engineers, as well as domain experts from civil society and the public sector.

As an example, Intel plans to share AI safety findings, best practices, and processes for responsible development, such as red-teaming and safety tests. As a founding member, Intel will contribute its expertise and knowledge to help create a flexible platform for benchmarks that measure the safety and risk factors of AI tools and models.

All in all, the new platform will support defining benchmarks that select from the pool of tests and summarize the outputs into useful, comprehensible scores, much as other industries do with automotive safety test ratings and ENERGY STAR scores.
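To make that idea concrete, here is a minimal sketch of how a benchmark might roll per-test results into one comprehensible rating. The test names, weights, and grading thresholds are hypothetical illustrations, not MLCommons’ actual design:

```python
# Illustrative sketch: combine per-test safety pass rates (0.0-1.0)
# into a single summary rating, similar in spirit to an automotive
# safety star rating. All names and numbers here are hypothetical.

def summarize(results: dict[str, float], weights: dict[str, float]) -> str:
    """Weight each test's pass rate, average, and map to a rating."""
    total_weight = sum(weights[name] for name in results)
    score = sum(results[name] * weights[name] for name in results) / total_weight
    if score >= 0.9:
        return "High"
    if score >= 0.7:
        return "Moderate"
    return "Low"

# Hypothetical pool of tests and how much each counts toward the score.
weights = {"toxicity": 0.4, "misinformation": 0.4, "bias": 0.2}
# Hypothetical pass rates for one model under evaluation.
results = {"toxicity": 0.95, "misinformation": 0.88, "bias": 0.91}

print(summarize(results, weights))  # weighted score 0.914 -> "High"
```

The point of the weighted summary is the same one the working group makes: a buyer doesn’t need every raw test output, just a score they can compare across models.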

The group’s most pressing initial priority will be supporting the rapid evolution of more rigorous and reliable AI safety testing technology. The AIS working group will draw on the technical and operational expertise of its members, and the larger AI community, to help guide and create AI safety benchmarking technologies.

One of the initial focuses will be developing safety benchmarks for LLMs (large language models), building on the work done by researchers at Stanford University’s Center for Research on Foundation Models and its HELM (Holistic Evaluation of Language Models) framework.

While this is just one example, it is a step in the right direction toward making AI safer for all, addressing many of the concerns about misinformation and bias that span industries. As the testing matures, we will have more opportunities to use AI in ways that are safe for everyone. The future certainly is bright.

    Want to tweet about this article? Use hashtags #IoT #sustainability #AI #5G #cloud #edge #futureofwork #digitaltransformation #green #ecosystem #environmental #circularworld

