🚨 How safe is GPT-4o?

🎓 OpenAI welcomes AI safety expert


TOGETHER WITH INNOVATING WITH AI

Welcome to AI Tool Report!

Friday’s top story: OpenAI has released a safety testing report that reveals its newest model—GPT-4o—is “medium risk” when it comes to swaying public opinion with its text.

🌤️ This Morning on AI Tool Report

  1. 🚨 OpenAI’s GPT-4o: Unsafe?

  2. 🧑‍💼 How to become an AI consultant

  3. ❗ EU wins against Musk

  4. 👶 How to give your kids a head start with AI, in the safest way

  5. 🎯 How to create targeted ads using ChatGPT

  6. 🎓 OpenAI welcomes AI safety expert

  7. 🛑 Amazon investigated for AI partnership

Read Time: 5 minutes

STOCK MARKETS


⬆️ AI and tech names rally as concerns about a recession recede from the forefront of investors’ minds. Buyers stepped in at a crucial point, pushing NVIDIA above the psychologically important $100 level, which will be key to hold if the larger uptrend is to continue. Learn more.

— — — — — — —

SAFETY

GPT-4o: Unsafe?

Our Report: OpenAI has released a research report (called a System Card) that outlines how an external group of red teamers (security experts who probe AI models for weaknesses and highlight risks) safety-tested its newest GPT-4o model before its release in May, and found it to be “medium risk.”

🔑 Key Points:

  • To find weaknesses and risks, the red teamers ran GPT-4o through four categories of tests: Cybersecurity, Biological Threats, Persuasion, and Model Autonomy, and found it was “low risk” in all except Persuasion.

  • Although GPT-4o’s voice feature was found to be “low risk,” red teamers found that 3 out of 12 writing samples from GPT-4o were better at swaying readers’ opinions than human-written content.

  • GPT-4o’s output was more persuasive than human-written content only a quarter of the time, but the tests specifically measured the model’s ability to sway political opinions, just ahead of the US elections.

🤔 Why you should care: OpenAI released this System Card to demonstrate that it takes safety very seriously, after facing increasing backlash over prioritizing “shiny new products” over safety: key team members have made swift exits, reports from ex-employees have echoed the criticism, and most recently Senator Elizabeth Warren sent an open letter demanding answers about how OpenAI handles safety reviews. The real question remains: could GPT-4o spread misinformation, or be used by bad actors to sway public voting during elections?

Should we worry that GPT-4o could sway public voting during elections?


— — — — — — —

Together with Innovating with AI


Our friends at Innovating with AI just welcomed 170 new students into The AI Consultancy Project, their new program that trains you to build a business as an AI consultant. Here are some highlights...

  • The tools and frameworks to find clients and deliver top-notch services

  • A 6-month plan to build a 6-figure AI consulting business

  • Early access to the next enrollment cycle for AI Tool Report readers

— — — — — — —

DATA PROTECTION

❗ EU wins against Musk


Our Report: Elon Musk has agreed to (temporarily) stop using data from European X (formerly Twitter) users to train its AI chatbot, Grok, after the Irish Data Protection Commission (DPC) instigated court proceedings over concerns that X was processing personal data without users’ consent.

🔑 Key Points:

  • Although he’s agreed to stop processing European X users' data, Musk thinks the DPC’s court order is “unwarranted” as users can untick a box in their privacy settings to opt out of having their data used for training.

  • While the DPC has welcomed Musk’s cooperation to suspend data collection, it argues that X began processing EU users' data on May 7th, but only offered the option to opt out (to some users) from July 16th.

  • As a result, during a hearing on Thursday, Judge Leonie Reynolds established that all data collected from EU X users between May 7th and August 1st would not be used for training until the court issues its ruling.

🤔 Why you should care: This is yet another example of how regulatory scrutiny is intensifying in Europe. X’s decision to comply follows a similar decision by Meta, which decided not to launch its newest AI model, Llama 3, after facing scrutiny from the Irish DPC over how it used and processed data, and by Google, which agreed to delay and change its Gemini chatbot over similar concerns.

— — — — — — —

What causes ocean waves? Yes, that’s an actual question from one of the curious kids currently using Angel AI.

Angel is an AI-powered browser that enables kids to safely explore the online world through age-appropriate content and experiences. Through an engaging, voice-activated app, your child can ask questions that spark their curiosity and imagination.

With Angel, you can give your kids a head start with AI in the safest way possible. It’s time to make learning fun and age-appropriate for kids.

— — — — — — —

PROMPT ENGINEERING

Friday’s Prompt: How to create targeted ads using ChatGPT

Type this prompt into ChatGPT:

❝

Create a series of advertisements that resonate with our brand's personality and appeal to our target market


Results: After typing this prompt, you’ll get a series of targeted ad ideas that represent your brand and resonate with your target audience.
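If you’d rather script this than paste it into the ChatGPT web app, here’s a minimal sketch using the OpenAI Python SDK. Treat it as an illustration, not part of the original tip: the brand personality and target-market lines are placeholder assumptions (the prompt leaves those details to you), and the model name is just an example.

```python
# Minimal sketch: sending Friday's prompt through the OpenAI API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set
# in your environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The brand details below are hypothetical placeholders; swap in your own.
prompt = (
    "Create a series of advertisements that resonate with our brand's "
    "personality and appeal to our target market. "
    "Brand personality: playful and eco-conscious. "
    "Target market: urban commuters aged 25-40."
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

As with the web app, the more concrete the brand details you append to the prompt, the more targeted the ad ideas you’ll get back.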

P.S. Use the Prompt Engineer GPT by AI Tool Report to 10x your prompts.


— — — — — — —

ANNOUNCEMENTS

🎓 OpenAI welcomes AI safety expert

  • OpenAI has added a new member to its board of directors: Zico Kolter, professor and director of the Machine Learning department at Carnegie Mellon University, whose research predominantly focuses on AI safety.

  • Kolter will also join OpenAI’s Safety and Security Committee, which is responsible for overseeing safety decisions, but as it’s mainly composed of internal employees, many are questioning its effectiveness.

  • This is a strategic move from OpenAI that comes at a pivotal moment, as it struggles to combat the influx of criticism over how it handles safety, following resignations and damning reports from ex-employees.

— — — — — — —

REGULATIONS

🛑 UK investigates Amazon's AI plans

  • The UK’s Competition and Markets Authority (CMA) says it has “sufficient information” to formally investigate Amazon’s relationship with the AI start-up Anthropic (maker of the chatbot Claude), after Amazon invested $4B in the company.

  • The UK competition regulator believes the partnership amounts to a “quasi-merger” (where big tech firms invest in, or hire staff from, start-ups to gain a monopoly) that could harm competition in the UK.

  • In response, Anthropic insists it’s an “independent company,” while Amazon says it’s “disappointed” in the CMA and believes its collaboration with Anthropic “doesn’t meet the CMA’s threshold for review.”

We read your emails, comments, and poll replies daily.

Hit reply and tell us what you want more of!

Until next time, Martin & Liam.

P.S. Don’t forget, you can unsubscribe if you don’t want us to land in your inbox anymore.

What did you think of this edition?
