🛡️ Google’s safer AI

🔒 OpenAI’s AI safety commitment


TOGETHER WITH INCOGNI

Welcome to AI Tool Report!

Thursday’s top story: Google has released three new “safer”, “smaller”, and “more transparent” AI models that are “open” for anyone to access. 

🌤️ This Morning on AI Tool Report

  1. 🛡️ Google’s new, safer “open” AI

  2. 💻 How to protect your personal info with Incogni

  3. 🔒 OpenAI’s commitment to US AI safety

  4. ⚙️ How to scale your business with intelligent automation

  5. 🗓️ How to automate events in Calendly and Google Calendar

  6. 💡 Safer searches with Google?

  7. 🌮 Taco Bell's AI drive-thru

Read Time: 5 minutes

STOCK MARKETS


⬆️ AI stocks bounced back with a big day of gains across the market. NVIDIA was the biggest gainer, rising +12.81%. The rally was driven by Fed Chairman Jerome Powell hinting that a rate cut could be on the table as early as September. Learn more.

What do you think of our new AI stock market tracker?


— — — — — — —

LAUNCHES

Google’s new, safer “open” AI

Our Report: Google has released three new AI models (Gemma 2 2B, ShieldGemma, and Gemma Scope) that, unlike Google’s “closed” Gemini models, are “open,” giving developers access to the models’ weights and code. Although designed for different use cases, all three share a common focus on safety and sit within the Gemma 2 family of models (which launched in May).

🔑 Key Points:

  • Gemma 2 2B is a lightweight model built for generating and analyzing text. It can be used for research and commercial applications and, thanks to its small size, can run on laptops and edge devices (see the quick-start sketch after this list).

  • ShieldGemma is a safety shield designed to detect, monitor, and filter out toxic and harmful content and prompts—like hate speech, harassment, and explicit material—enhancing the safety of AI models.

  • Gemma Scope allows developers to “zoom in” on specific parts of the Gemma 2 models’ inner workings and gain key insights into how they identify patterns, process information, and make predictions.
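For anyone who wants to try the smallest of the three, here’s a minimal sketch of running Gemma 2 2B locally with the Hugging Face transformers library. The checkpoint name, generation settings, and hardware notes are our assumptions, not part of Google’s announcement; check the official model card before relying on any of it.

```python
# Minimal sketch: run Gemma 2 2B locally via Hugging Face transformers.
# Assumes `pip install -U transformers torch` and that you've accepted the
# Gemma license on Hugging Face; the model id below is our assumption.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # instruction-tuned 2B checkpoint
)

prompt = "In one sentence, why do small, open AI models matter?"
result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```

On a recent laptop this runs on CPU (slowly); pass a GPU device to the pipeline if you have one.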

🤔 Why you should care: The launch of these three “safe” and “open” AI models is a timely release from Google. It comes just after the US Commerce Department released a report (earlier this week) endorsing the use of “open” AI models, saying they benefit small companies, researchers, and individual developers and foster innovation and healthy competition within the AI industry, while also highlighting the importance of safety.

— — — — — — —

Together with Incogni


You’ve likely received a sketchy call, text, or email asking for $$$. It might've been easy to spot, but with deepfakes and AI, scams are getting trickier.

Scammers use your personal data, often bought legally from data brokers who sell your mobile number, DOB, SSN, and more. Incogni scrubs your data from the web, taking on 175+ data brokers.

Unlike others, Incogni deletes your info from all broker types, including people search sites where anyone can buy your details for a few bucks.

— — — — — — —

SAFETY

🔒 OpenAI’s commitment to US AI safety


Our Report: In a post on X (formerly Twitter), OpenAI CEO Sam Altman confirmed that OpenAI has committed to giving the US AI Safety Institute (a federal body focused on mitigating risks in AI models) early access to its next AI model.

🔑 Key Points:

  • The commitment mirrors a similar one OpenAI made with the UK’s AI Safety Institute in June, and could be an attempt to counter the narrative that OpenAI prioritizes launching new products over safety.

  • OpenAI recently dissolved its “superalignment” safety team (which was dedicated to safeguarding against the risks of highly capable AI systems) and has been slammed by ex-employees over its approach to safety.

  • In the same post on X, Altman reminded us that OpenAI is “committed to allocating 20% of computing resources to safety efforts” and that they’ve “voided non-disparagement terms” that discouraged whistleblowing.

🤔 Why you should care: Although Altman stated that OpenAI has “worked hard to make it right,” many critics remain skeptical about the timing of its collaboration with the US AI Safety Institute. It comes directly after OpenAI endorsed the “Future of Innovation” bill, a proposal that would authorize the AI Safety Institute as an executive body setting standards and guidelines for AI models, and a body which Altman himself is a part of, leaving many to assume that this is just an attempt to exert influence over AI policymaking.

— — — — — — —


Don't let your business fall behind in the tech race.

But where do you start?

EleventhAI takes care of your AI and Automation needs.

Benefit from innovative end-to-end solutions that result in:

  • Saved time & budget

  • Operational excellence

  • A future-proof organization

AUTOMATION OF THE WEEK

— — — — — — —


This week’s automation syncs your Calendly events with Google Calendar, notifies your team via Slack, and adds a reminder in Google Calendar, so you and your team never miss a meeting and stay organized.

Want to know how to set this up in detail? Read this
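If you’d rather wire up part of this yourself instead of using Zapier, here’s a rough sketch in Python of the Slack half of the flow: a small webhook receiver that listens for Calendly’s invitee.created events and pings a Slack channel via an incoming webhook. The route name, payload field names, and environment variable are our assumptions; Calendly’s own Google Calendar integration would still handle the calendar sync and reminders.

```python
# Rough sketch: notify a Slack channel when a new Calendly booking arrives.
# Assumes `pip install flask requests`, a Calendly webhook subscription
# pointing at /calendly-webhook, and SLACK_WEBHOOK_URL set to a Slack
# incoming-webhook URL. Payload field names are assumptions; check
# Calendly's webhook docs for the exact schema.
import os
import requests
from flask import Flask, request

app = Flask(__name__)
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

@app.route("/calendly-webhook", methods=["POST"])
def calendly_webhook():
    data = request.get_json(force=True)
    if data.get("event") == "invitee.created":
        invitee = data.get("payload", {})
        name = invitee.get("name", "Someone")
        email = invitee.get("email", "unknown email")
        # Post a plain-text message to the team channel.
        requests.post(SLACK_WEBHOOK_URL, json={
            "text": f"📅 New Calendly booking: {name} ({email})"
        })
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)
```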

PREMIUM INSIGHTS


Say goodbye to missed meeting details with Fireflies! This AI tool records, transcribes, and summarizes your video meetings, ensuring all critical information is captured.

Integrated with Zoom, Google Meet, and Microsoft Teams, it fits seamlessly into your workflow. Boost team collaboration by sharing summaries via Slack or embedding them into Notion for real-time knowledge retrieval.

Dive deeper into these game-changing strategies.

BREAKING NEWS

— — — — — — —

SEARCH

💡 Safer searches with Google?

  • Google is launching new safety features to reduce exposure to explicit deepfake image search results, giving people “added peace of mind, especially if they’re concerned about similar content popping up in the future.”

  • When a user asks Google to delete explicit, nonconsensual fake content (using a more streamlined process), Google’s algorithms will filter out similar explicit results and searches and delete duplicate images.

  • Google Search will also detect search terms that are likely to surface explicit deepfake content, lower the visibility of that content (and its associated websites), and show users real content instead.

— — — — — — —

VOICE AI

🌮 Taco Bell's AI drive-thru

  • Yum! Brands, the parent company of Taco Bell, has announced it will roll out its AI voice tech (currently integrated into just 100 drive-thrus) to its 7,200 US-based Taco Bell locations by the end of 2024.

  • This comes after two years of fine-tuning and testing the drive-thru AI voice technology and finding it improved order accuracy, reduced wait times, decreased employees’ task load, and fueled profitable growth.

  • Drive-thru voice AI gets orders wrong when it hasn’t been trained on enough voice data to understand different accents, as McDonald’s discovered after customers complained that its AI system kept botching their orders.

We read your emails, comments, and poll replies daily.

Hit reply and tell us what you want more of!

Until next time, Martin & Liam.

P.S. Don’t forget, you can unsubscribe if you don’t want us to land in your inbox anymore.

What did you think of this edition?
