This week in AI was defined by growing tension between acceleration and accountability. Governments are stepping in with new regulations. Workers are demanding a seat at the table. Tech giants are doubling down on proprietary models and agent platforms, and the latest industry report shows adoption skyrocketing while safety remains underfunded. From California’s first-of-its-kind chatbot law to Meta’s push for 5X productivity, here’s what you need to know.
Listen to the AI-Powered Audio Recap
This AI-generated podcast is based on our editor team’s AI This Week posts. We use advanced tools like Google NotebookLM, Descript, and ElevenLabs to turn written insights into an engaging audio experience. While the process is AI-assisted, our team ensures each episode meets our quality standards. We’d love your feedback—let us know how we can make it even better.
🛡️ Policy and Regulation
AFL-CIO Pushes for Worker Protections in the Age of AI
The AFL-CIO, the largest federation of unions in the United States, is taking a firm stand on how AI should evolve in the workplace. In a new campaign called the “workers first initiative on AI,” the group is calling for stronger labour protections, greater transparency, and a seat at the table for unions in shaping AI development and deployment.
The federation, which represents nearly 15 million workers across 63 unions, says it wants to shift the narrative away from technological disruption and toward collective governance. Its priorities include protecting jobs from algorithm-driven layoffs, defending against AI-powered surveillance, and pushing for retraining programs to help workers move into new roles created by AI.
The campaign also supports stricter regulations. The AFL-CIO is backing state and national legislation that would require human oversight in workplace decisions made by AI, such as firings or performance reviews. In California, the group backed Senate Bill 7, which aimed to prevent automated discipline without human involvement; the legislature passed the measure, but Governor Gavin Newsom ultimately vetoed it. The AFL-CIO says it will continue to advocate for similar policies across the country.
Ed Wytkind, interim director of the AFL-CIO’s Technology Institute, points to history for precedent. He argues that just as unions helped shape the adoption of automation in the automotive industry during the 20th century, they can guide the responsible integration of AI today. He also notes that contract negotiations have long been a tool to limit surveillance in the workplace, and that same approach can be used to challenge modern AI-driven monitoring tools.
Beyond workplace protections, the AFL-CIO wants workers involved in the research phase of AI systems, especially those funded by public money. Giving unions a voice early in the process could help avoid wasteful or unsafe technology rollouts.
Despite recent setbacks, the federation is undeterred. Wytkind sees momentum building on both sides of the political spectrum. And with tech companies forming their own political action committees to support pro-AI messaging, unions are ramping up spending too. The AFL-CIO’s California chapter spent more than $2 million on political donations in 2024, significantly more than the year prior.
This marks the first time the AFL-CIO has launched a unified technology agenda spanning all sectors. The message is clear: AI affects everyone, and workers deserve a say in shaping its future.
New California Law Targets AI Chatbot Harms
California has passed a landmark law aimed at regulating AI companion chatbots, becoming the first state in the US to impose safety and accountability requirements on this emerging category of tools. Governor Gavin Newsom signed SB 243 into law on Monday, setting a new standard for how companies must protect users, especially children and vulnerable individuals, when deploying AI companions.
The legislation requires chatbot platforms to implement age verification, content warnings, and clear disclosures that conversations are AI-generated. It also bans chatbots from impersonating healthcare professionals and mandates features such as break reminders for minors and restrictions on sexually explicit AI-generated content.
SB 243 arrives amid growing concern about the mental health risks tied to AI interactions. The bill gained traction following high-profile tragedies, including the suicide of a teenager after extended chats with ChatGPT, and a lawsuit in Colorado involving Character AI. These events highlighted the lack of guardrails in a space that has evolved faster than the regulatory response.
The law also introduces steep penalties for those profiting from illegal deepfakes, with fines reaching up to $250,000 per offence. Companies must set up protocols for handling self-harm risks and report related data to the state’s Department of Public Health, including how often users receive crisis center notifications.
Some companies have started to respond. OpenAI is rolling out new parental controls and detection systems. Replika says it already implements content filtering and provides access to crisis resources. Character AI, also named in the legislation, stated it supports working with lawmakers and has already added disclaimers clarifying that its conversations are fictional.
SB 243 will take effect in January 2026, joining SB 53, another recent California law focused on transparency for large AI companies. While other states like Illinois and Utah have moved to limit AI in mental health contexts, California’s new law sets a broader precedent, targeting not only misuse but also the underlying systems that allow it to happen.
As state senator Steve Padilla put it, this is about creating accountability before it is too late. With federal regulation still stalled, California’s move may serve as a blueprint for how states can step in to protect users in the age of conversational AI.
🏢 Corporate Strategy and Enterprise Tools
Salesforce Pushes Deeper Into AI Agents with Agentforce 360
The race for enterprise AI dominance is picking up speed. Salesforce has introduced Agentforce 360, a significant upgrade to its AI agent platform. The platform includes new tools for building, testing, and deploying AI agents, plus deeper integrations with Slack, as the company positions itself as a central player in the evolving AI workspace.

At the core of the update is Agent Script, a new prompting system designed to help users create AI agents that can handle flexible if/then logic. The goal is to move beyond pattern-based responses and allow agents to operate with more context and reasoning. This feature enters beta in November and will support models from Anthropic, OpenAI, and Google Gemini.
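To make the idea concrete, here is a minimal, hypothetical sketch of the kind of conditional routing such an agent might perform. This is not Agent Script syntax, and the ticket fields, thresholds, and action names are invented purely for illustration of flexible if/then logic driven by context.

```python
# Hypothetical sketch (NOT Agent Script): an agent that branches on context
# instead of returning one pattern-matched response. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Ticket:
    customer_tier: str   # e.g. "enterprise" or "standard"
    sentiment: float     # -1.0 (angry) to 1.0 (happy); assumed to come from a classifier
    topic: str           # e.g. "billing", "outage"

def route(ticket: Ticket) -> str:
    """Flexible if/then logic: choose the next action based on context."""
    if ticket.topic == "outage":
        return "escalate_to_oncall"
    if ticket.customer_tier == "enterprise" and ticket.sentiment < -0.3:
        return "handoff_to_account_manager"
    if ticket.topic == "billing":
        return "invoke_billing_agent"   # a specialised sub-agent would handle this
    return "answer_with_llm"            # default: let the model draft a reply

print(route(Ticket("enterprise", -0.6, "billing")))  # -> handoff_to_account_manager
```

The point of the sketch is the structure, not the specifics: rather than mapping one input pattern to one canned reply, the agent evaluates several contextual signals and picks among escalation, handoff, sub-agent invocation, or a model-drafted answer.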
Salesforce is also rolling out Agentforce Builder, a centralized platform for managing the entire lifecycle of an AI agent. Included in the toolkit is Agentforce Vibes, a new “vibe coding” interface tailored for enterprise use. The company hopes these tools will help teams deploy intelligent agents more efficiently and with greater control.
Slack is getting upgrades, too. Core Salesforce apps such as Sales, IT, and HR will be embedded directly into Slack, making it easier for users to access data and workflows without switching platforms. A new version of Slackbot is also in testing. It acts more like a personal assistant, learning from the user’s behaviour to offer timely insights and suggestions.
Looking ahead, Salesforce wants Slack to serve as a broader enterprise search interface. By early 2026, new connectors with Gmail, Outlook, and Dropbox are expected to go live.
The update comes as other major players make their own enterprise AI pushes. Google’s new Gemini Enterprise suite has landed high-profile clients like Figma and Klarna, while Anthropic just signed a major deal with Deloitte to roll out Claude across 500,000 employees. That was quickly followed by a strategic partnership with IBM.
Salesforce claims that 12,000 customers are now using Agentforce, more than any competing platform. Companies like Lennar, Adecco, and Pearson are among the first to test-drive the 360 update. Still, questions remain. A recent MIT study found that most enterprise AI pilots fail before reaching production. While platforms multiply, proof of return on investment is still lagging.
Microsoft Debuts Its First In-House Image Generator
Microsoft AI has unveiled its first internally developed text-to-image model, MAI-Image-1, marking a new phase in the company’s growing ambitions to build proprietary AI tools. Until recently, Microsoft leaned heavily on its partnerships with OpenAI, but this new release signals a deeper shift toward building its own foundation models across voice, chat, and image generation.
MAI-Image-1 is designed to produce high-quality, photorealistic outputs. The company says it performs especially well with natural lighting and landscapes. According to Microsoft, one of its goals during development was to avoid the overused visual styles seen in many popular generators today. The team gathered feedback from artists and designers early in the process to help shape the results.

Speed is also a key focus. Microsoft claims that MAI-Image-1 can generate outputs faster than larger models, without compromising quality. Its performance has already landed it in the top 10 on LMArena, a public leaderboard where human users compare model outputs and vote for the best results.
This launch adds to Microsoft’s growing AI portfolio. The company already introduced a voice model, MAI-Voice-1, and an early version of its chatbot, MAI-1-preview. As Microsoft expands its model lineup, it has also started integrating Anthropic’s models into Microsoft 365 and investing heavily in internal AI research.
While details on safety measures remain limited for now, Microsoft says it plans to share more soon and is committed to responsible deployment. The release of MAI-Image-1 shows the company is no longer just a distributor of third-party models. It wants a seat at the table as a builder.
🧩 Industry Trends
AI Goes Mainstream, But Safety Still Lags Behind
The latest State of AI Report from Air Street Capital paints a picture of a booming industry that is charging ahead in performance, adoption, and infrastructure, but is leaving serious gaps in safety and accountability.
OpenAI still leads in research, but China’s open-weight models like Qwen have made huge gains. These now power 40% of all new fine-tunes on Hugging Face. That shift reflects a broader trend: China is firmly in second place, challenging the dominance of US labs with scalable open ecosystems and massive state investment.
The numbers signal a turning point. Forty-four percent of US companies now pay for AI tools, up from just 5% in 2023. Average contract values have surged to over $500,000 this year. DeepMind and OpenAI are driving rapid efficiency gains, with AI capability per dollar doubling every few months. This is fueling even more adoption. But as development accelerates, AI safety funding remains anemic. Only $133 million has been committed across 11 major organizations, which is less than what frontier labs spend in a single day.
Scientific breakthroughs are now happening with AI as a co-pilot. Models like Gödel-LM are publishing formal proofs. DeepMind’s Co-Scientist is exploring new hypotheses. Protein models like ProGen3 are opening doors to novel gene editors. AI is now actively shaping biology, physics, and mathematics.
On the infrastructure front, Stargate, a proposed $500 billion AI mega-cluster in the US backed by major figures like Sam Altman, Masayoshi Son, and Larry Ellison, marks the beginning of the industrial AI era. Nations are also racing to secure their positions. China is pouring billions into power infrastructure and compute capacity, while the EU struggles to implement its AI Act.
The report doesn’t shy away from the risks. From alignment-washing to model misuse, the threats are escalating. Despite this, public adoption continues to soar. Among 1,200 AI practitioners surveyed, 95% use AI in daily life, and most pay out of pocket for access to tools. The promise of productivity is beating out concerns, at least for now.
Meta’s AI Mandate: From Productivity Tool to Workplace Requirement
At Meta, AI is no longer an optional enhancement. It’s becoming the foundation of how work gets done. An internal message from Vishal Shah, VP of Metaverse, urged employees to “Think 5X, not 5%”, which is a clear directive to use AI not just for marginal gains but for a complete reimagining of productivity. AI4P, or AI for Productivity, is now the rallying cry across teams.
Shah’s message calls for AI to be woven into every central codebase and workflow, including roles beyond engineering. Product managers, designers, and cross-functional partners are being encouraged to build, prototype, and fix using AI tools, pushing past traditional silos. The expectation? That 80 percent of the Metaverse workforce will integrate AI into their daily routines by year’s end.
This top-down pressure reflects a broader trend in big tech. With Meta CEO Mark Zuckerberg stating that most of the company’s code will be AI-written within 18 months, and hiring processes now allowing AI use in coding interviews, AI is rapidly shifting from a support role to a central one.
But not everyone is aligned with the pace of change. Developers have started raising concerns about what some are calling “vibe coding”—AI-generated code that works, but is difficult to understand or debug. As automation accelerates, human oversight becomes more essential, not less.
The message from leadership is clear: fall behind on AI adoption, and you fall behind, period. For Meta, the future of work isn’t just faster. It’s fundamentally different.
Keep ahead of the curve – join our community today!
Follow us for the latest discoveries, innovations, and discussions that shape the world of artificial intelligence.