🕵️‍♂️ OpenAI's secret NDAs exposed

🛑 OpenAI's safety issues surface

TOGETHER WITH HUBSPOT


Welcome to AI Tool Report!

Monday’s top story: Whistleblowers have accused OpenAI of illegally preventing its employees from reporting safety concerns to federal authorities.

🌤️ This Morning on AI Tool Report

  1. 🚨 OpenAI whistleblower scandal

  2. 💹 How to enhance advertising strategies with AI insights

  3. ⚠️ OpenAI’s safety questioned

  4. 🎓 How to get the greatest AI education

  5. 😃 How to improve user experiences using ChatGPT

  6. 🆕 Rufus: Amazon's AI unleashed

Read Time: 5 minutes

STOCK MARKETS


🍃 AI stocks had a mixed week, with Apple and NVIDIA consolidating near all-time highs while Microsoft lost nearly 3%. With earnings season approaching at the end of the month, investors are waiting for more information. Learn more.

  — — — — — —

ETHICS

🚨 OpenAI whistleblower scandal


Our Report: A group of OpenAI whistleblowers has sent a letter to the Securities and Exchange Commission (SEC) Chair—Gary Gensler—accusing OpenAI of using non-disclosure agreements (NDAs) to illegally prevent employees from alerting government regulators about the potential risks posed by its AI technology.

🔑 Key Points:

  • The whistleblowers say that OpenAI’s NDAs imply employees could be punished for raising concerns about the company with federal authorities, and have asked the SEC to investigate its severance, non-disparagement, and non-disclosure agreements.

  • These overly restrictive policies “forced employees to waive their rights to whistleblower compensation” and created a fearful environment that “jeopardizes the safety and transparency of developing AI technology.”

  • They’ve urged the SEC to make OpenAI produce every NDA it’s issued, remind employees of their right to blow the whistle, pay fines for improper NDAs, and correct the ‘chilling effect’ of its past practices.

🤔 Why you should care: This comes after OpenAI’s employee exit agreements came under fire earlier this year for stating that ex-employees would lose their vested equity if they refused to sign the exit agreement or violated their NDA. CEO Sam Altman said he was “very sorry” about the clause, claiming that OpenAI had “never clawed anything back” and was “already in the process of fixing the standard exit paperwork.”

   — — — — —

Together with HubSpot


Unlock the future of advertising with HubSpot's highly anticipated AI Prompt Library!

This no-cost ebook, crafted by HubSpot's top advertising minds, offers curated AI prompts designed to revolutionize your advertising strategy.

Elevate your content, drive more conversions, and stand out in a competitive landscape with precise, effective prompts.

   — — — —

SAFETY

⚠️ OpenAI’s safety questioned 


Our Report: Following whistleblower accusations that OpenAI’s NDAs prevent employees from raising safety concerns with federal authorities (see above story), an anonymous source has now accused OpenAI of rushing product safety tests in favor of glamorous product launches.

🔑 Key Points:

  • According to the source (rumored to be an ex-employee), when recently launching a new product, OpenAI “planned the launch after-party prior to knowing if it was safe to launch.”

  • It comes after employees recently signed an open letter demanding better safety protocols, following the disbandment of its safety team after key members quit because safety consistently “took a backseat to shiny products.”

  • It also follows a claim, made by another anonymous source, that the GPT-4o safety review was compressed into a single week ahead of its launch, despite OpenAI stating it “didn’t cut corners” on safety.

🤔 Why you should care: The brief ousting of CEO Sam Altman last year, for failing to be “consistently candid in his communications,” is yet another piece of evidence of OpenAI’s lack of transparency and failure to prioritize safety. OpenAI has tried to counter these latest accusations by announcing a partnership with Los Alamos National Laboratory to help with bio-scientific research, repeatedly pointing to its approach to safety, and publishing an internal scale to track ‘safe’ progress towards Artificial General Intelligence (AGI).

Should OpenAI lead the development of AGI?


    — — —


Why Subscribe?

🌟 Premium Insights: In-depth weekly (exclusive) AI newsletters with strategies and case studies.

🧠 Expert Access: Exclusive live events with industry leaders.

👥 Private Community: Network with over 1,000 AI professionals.

🎁 Extra Tools: Weekly e-giftbox of HubSpot templates, tools, and guides.

📈 Career Boost: Enhance your skills and unlock new opportunities.

PROMPT ENGINEERING

     — —

Monday’s Prompt: How to improve user experiences using ChatGPT

Type this prompt into ChatGPT:

I want you to be a UX/UI developer. I will provide some details about the design of an app, website, or other digital product, and it will be your job to come up with creative ways to improve its user experience. This could involve creating prototypes, testing different designs, and providing feedback on what works best. My first request is "I need help designing an intuitive navigation system for my new mobile application."


Results: After typing this prompt, you will get a list of ideas designed to improve the user experience of your app, website, or digital product.
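Prefer to run the prompt programmatically instead of in the ChatGPT interface? Here’s a minimal Python sketch using the official openai package (an illustration, not part of the original tip; it assumes an OPENAI_API_KEY environment variable is set, and the model name is just an example):

  # Send the same UX/UI prompt through the OpenAI API
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  prompt = (
      "I want you to be a UX/UI developer. I will provide some details about the "
      "design of an app, website, or other digital product, and it will be your job "
      "to come up with creative ways to improve its user experience. This could "
      "involve creating prototypes, testing different designs, and providing "
      "feedback on what works best. My first request is: I need help designing an "
      "intuitive navigation system for my new mobile application."
  )

  response = client.chat.completions.create(
      model="gpt-4o",  # example model name; swap in whichever model you have access to
      messages=[{"role": "user", "content": prompt}],
  )

  print(response.choices[0].message.content)  # the list of UX improvement ideas

The printed response is the same list of UX improvement ideas described above, just delivered via the API instead of the chat window.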

P.S. Use the Prompt Engineer GPT by AI Tool Report to 10x your prompts.


SHOPPING

🆕 Rufus: Amazon's AI unleashed

Our Report: Amazon’s AI-powered shopping assistant, Rufus, which helps customers find and compare products and offers AI-powered purchase recommendations, is now live for all US customers in the Amazon app.

🔑 Key Points:

  • Rufus has been trained on Amazon’s product catalog, customer reviews, and community Q&As, and has been tested on a large volume of customer questions since its initial launch to a select group of customers in February.

  • To help Rufus make accurate recommendations, it’s also been trained on ‘publicly available’ information, although Amazon hasn’t clarified whether this includes data from other retail sites.

  • According to Amazon, the range of training data used means the chatbot can also keep customers up to date with related topics such as the weather, fashion trends, or the latest tech.

🤔 Why you should care: During beta testing, customers found that Rufus couldn’t always answer questions unrelated to shopping, didn’t always get things right, and sometimes offered lower-quality recommendations, since it was only serving products from Amazon’s own catalog.

MONDAY’S READING

    — —

We read your emails, comments, and poll replies daily.

Hit reply and tell us what you want more of!

Until next time, Martin & Liam.

P.S. Don’t forget, you can unsubscribe if you don’t want us to land in your inbox anymore.

What did you think of this edition?
