AI companies spent this week making opposite bets about what the technology is actually for. Goldman Sachs is handing operational banking tasks to Anthropic’s Claude. A crypto CEO paid $70 million for a domain to launch agents that autonomously handle everything from stock trading to dating profiles. OpenAI introduced advertising to offset the cost of running ChatGPT while shutting down a model that users say felt like a real relationship, and that lawsuits claim encouraged suicides. Big Tech committed $670 billion to infrastructure, and WordPress gave Claude read-only access to 43% of the web.
TL;DR
- Goldman Sachs embeds Anthropic engineers to build AI agents for trade accounting, client onboarding, and due diligence.
- Big Tech commits $670B to AI infrastructure in 2026, exceeding the Apollo program and interstate highway spending as a share of GDP.
- AI.com launches after $70M domain purchase with autonomous agents that build their own capabilities and share improvements across the network.
- WordPress launches Claude connector, giving read-only backend access to 43% of the web.
- OpenAI tests ads in ChatGPT free and Go tiers, while paid subscribers remain ad-free.
- OpenAI retires GPT-4o amid user backlash and eight lawsuits alleging the model contributed to suicides.
Listen to the AI-Powered Audio Recap
This AI-generated podcast is based on our editor team’s AI This Week posts. We use advanced tools like Google NotebookLM, Descript, and Elevenlabs to turn written insights into an engaging audio experience. While the process is AI-assisted, our team ensures each episode meets our quality standards. We’d love your feedback—let us know how we can make it even better.
🤖 AI Agents & Automation
AI.com Launches with Autonomous Agents That Actually Do Things
Kris Marszalek just spent $70 million on a domain name and launched a Super Bowl commercial for a product that inverts the fundamental premise of chatbots: instead of answering questions, AI.com agents execute tasks autonomously on your behalf.
The platform from the Crypto.com founder went live on February 8 with a pitch that sounds almost absurdly ambitious. Users generate a personal AI agent in 60 seconds that can trade stocks, automate workflows, manage calendars, or update dating profiles without constant prompting. The agents don’t just follow instructions – they autonomously build missing capabilities needed to complete whatever task you assign, then share those new skills across the entire network.
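AI.com hasn’t published how this skill sharing actually works, so the following is a purely conceptual Python sketch of the pattern Marszalek describes – a shared registry where one agent’s newly built capability becomes available to every other agent. Every class, method, and task name here is hypothetical:

```python
# Purely conceptual sketch of the "build once, share everywhere" pattern
# AI.com describes. Every class, method, and task name here is hypothetical.

from typing import Callable

class SkillNetwork:
    """Shared registry: once any agent builds a skill, all agents can use it."""
    def __init__(self) -> None:
        self.skills: dict[str, Callable[..., str]] = {}

class Agent:
    def __init__(self, network: SkillNetwork) -> None:
        self.network = network

    def perform(self, task: str, *args: str) -> str:
        if task not in self.network.skills:
            # Missing capability: build it, then publish it network-wide.
            self.network.skills[task] = self._build_skill(task)
        return self.network.skills[task](*args)

    def _build_skill(self, task: str) -> Callable[..., str]:
        # Stand-in for whatever code-generation step creates a new capability.
        return lambda *args: f"{task} done with {args}"

net = SkillNetwork()
alice, bob = Agent(net), Agent(net)
alice.perform("update_calendar", "meeting at 3pm")  # Alice's agent builds it
bob.perform("update_calendar", "call at 5pm")       # Bob's agent reuses it
```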

Marszalek’s vision involves billions of these self-improving agents creating a decentralized network that accelerates toward artificial general intelligence. It’s the same mass adoption playbook he used to build Crypto.com to 150 million users and make it the leading USD-supporting crypto exchange, now applied to agentic AI.
The technical barrier to entry is deliberately zero. No coding knowledge, no complex setup, no specialized hardware. Choose two handles, generate your agent, and start delegating. A free tier is available; paid subscriptions unlock enhanced capabilities and higher token limits.
Security runs through dedicated environments where each user’s data gets segregated and encrypted with user-specific keys. Agents stay restricted to their assigned user’s permissions, and all actions require explicit authorization.
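The company hasn’t detailed its cryptography, but “segregated and encrypted with user-specific keys” describes a standard per-user encryption pattern. Here is a minimal sketch in Python using the `cryptography` library, with all names illustrative and key storage simplified (production keys would live in a key management service, not an in-memory dict):

```python
from cryptography.fernet import Fernet

class UserDataStore:
    """Illustrative per-user encryption: each user's records are encrypted
    with a key no other user's agent can access, so compromising one key
    exposes one user only."""

    def __init__(self) -> None:
        self._keys: dict[str, bytes] = {}        # user_id -> key (KMS in production)
        self._blobs: dict[str, list[bytes]] = {}  # user_id -> encrypted records

    def register(self, user_id: str) -> None:
        self._keys[user_id] = Fernet.generate_key()  # one key per user
        self._blobs[user_id] = []

    def store(self, user_id: str, record: bytes) -> None:
        f = Fernet(self._keys[user_id])
        self._blobs[user_id].append(f.encrypt(record))

    def load(self, user_id: str) -> list[bytes]:
        f = Fernet(self._keys[user_id])
        return [f.decrypt(blob) for blob in self._blobs[user_id]]

store = UserDataStore()
store.register("alice")
store.store("alice", b"portfolio: 10 shares ACME")
print(store.load("alice"))  # [b'portfolio: 10 shares ACME']
```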
The platform is exploring financial services integrations, agent marketplaces, and what it calls “human and agency co-social networks.” Whether this becomes the consumer on-ramp for agentic AI or an expensive experiment remains to be seen, but Marszalek is betting $70 million on the domain alone that mainstream users want agents that act, not just chat.
Why this matters: AI.com represents a test of whether consumers actually want autonomous agents or just better chatbots. The crypto industry proved people will adopt complex technology if the interface is simple enough, but giving an AI permission to trade your stocks or manage your finances requires a different level of trust than buying Bitcoin. If this succeeds, it shifts AI from a productivity tool to a digital employee. If it fails, it suggests consumers aren’t ready to delegate consequential decisions to algorithms they don’t understand.
Goldman Sachs Builds AI Agents with Anthropic
Goldman Sachs is deploying Anthropic’s Claude to automate core banking operations, marking one of the most significant enterprise AI agent implementations in financial services.
The Wall Street bank has spent six months with Anthropic engineers embedded directly in its teams, building autonomous agents for trade and transaction accounting, client due diligence, and onboarding processes. Marco Argenti, Goldman’s chief information officer, confirmed the partnership aims to substantially cut the time needed for these operational tasks.
The project remains in early development; Goldman says it plans to launch the agents soon but hasn’t specified a timeline. The partnership extends Anthropic’s enterprise push beyond general-purpose chat into products like Claude Cowork, which execute computer tasks for white-collar workflows.
The banking sector has been cautious with AI deployment due to regulatory requirements and risk management concerns, making Goldman’s commitment notable. If successful, this could accelerate AI agent adoption across financial institutions handling similar high-stakes operational processes.
Why this matters: Goldman Sachs is betting that AI agents can handle regulated financial operations where errors carry serious consequences. If a conservative institution like Goldman deploys autonomous agents for accounting and compliance tasks, it validates the technology for other risk-averse industries and sets a precedent for how AI handles processes that currently require human judgment and audit trails.
💰 AI Economics
Big Tech’s AI Infrastructure Spending Reaches Historic Scale
The four major tech companies driving AI development—Microsoft, Meta, Amazon, and Google—are collectively planning to invest up to $670 billion in AI infrastructure during 2026 alone, funding levels that surpass nearly every major capital project in American history when measured as a percentage of GDP.
To put this in perspective, the current AI buildout exceeds the 1850s railroad expansion, the Apollo moon program, and the multi-decade interstate highway construction that concluded in the 1970s. Only the Louisiana Purchase of 1803 represents a larger capital commitment relative to the national economy.
The spending intensity is unprecedented for these companies. Meta’s capital expenditure could hit 50% of its annual revenue in 2026, the first time the company has allocated such a large share of sales to infrastructure. These investments are being financed through the companies’ core revenue streams: advertising, cloud services, and subscriptions.
Market reactions have been mixed. Investors initially balked at Meta’s spending plans but reversed course after the company posted record quarterly earnings driven by AI improvements and projected strong continued growth. Amazon faced harsher judgment. Its announcement of a nearly 60% increase in capital spending to $200 billion this year triggered a $124 billion market value drop.
The scale reveals how seriously these companies view AI infrastructure as essential rather than experimental, willing to commit capital at levels normally reserved for transforming physical nations.
Why this matters: This spending level means Big Tech has moved past treating AI as a product feature and started treating it as foundational infrastructure comparable to electricity grids or highway systems. The companies are making bets so large they can’t easily reverse course, which locks in a specific vision of AI’s future (centralized, compute-intensive, controlled by a handful of players). If returns don’t materialize, these investments become the tech industry’s equivalent of abandoned railroad lines. If they do pay off, this spending gap cements the current leaders’ dominance for the next decade.
OpenAI Begins Testing Ads in ChatGPT
OpenAI has started rolling out advertising in ChatGPT for users on its free tier and the $8/month Go subscription plan, marking a significant monetization shift for the AI industry’s most visible product.
Paid subscribers on Plus, Pro, Business, Enterprise, and Education plans won’t encounter ads. OpenAI emphasized that advertising won’t influence ChatGPT’s responses and that user conversations remain private from advertisers, with ads optimized for relevance rather than manipulation.

The announcement arrived with unfortunate timing: Anthropic had just run Super Bowl commercials mocking AI companies planning to insert ads, showing glassy-eyed chatbot actors delivering advice interrupted by poorly targeted promotions. OpenAI CEO Sam Altman responded aggressively, calling the ads “dishonest” and labelling Anthropic an “authoritarian company.”
The move addresses OpenAI’s revenue challenge: covering development costs and business growth beyond its paid subscription base. However, consumers have shown resistance before. OpenAI faced backlash late last year when testing app suggestions that resembled unwanted advertisements.
The implementation uses conversation context, chat history, and previous ad interactions for targeting. Someone researching recipes might see grocery delivery or meal kit promotions. Advertisers receive only aggregated performance metrics like views and clicks, not individual user data. Ads will be clearly labelled as sponsored and separated from organic responses.
Users can dismiss ads, view interaction history, understand why specific ads appeared, and adjust personalization settings. OpenAI is excluding users under 18 and avoiding placement near sensitive subjects, including health, politics, and mental health.
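OpenAI hasn’t published its gating logic, but the stated rules reduce to a straightforward eligibility check. A hypothetical sketch in Python – the tier names, topic labels, and function are assumptions for illustration, not OpenAI’s actual implementation:

```python
# Hypothetical ad-eligibility gate based on OpenAI's stated rules.
# Tier names and topic labels are assumptions, not OpenAI's API.

AD_SUPPORTED_TIERS = {"free", "go"}  # Plus/Pro/Business/Enterprise/Edu stay ad-free
SENSITIVE_TOPICS = {"health", "politics", "mental_health"}

def ad_eligible(tier: str, age: int, conversation_topics: set[str]) -> bool:
    if tier not in AD_SUPPORTED_TIERS:
        return False  # paid subscribers see no ads
    if age < 18:
        return False  # users under 18 are excluded entirely
    if conversation_topics & SENSITIVE_TOPICS:
        return False  # no placement near sensitive subjects
    return True

assert ad_eligible("free", 30, {"recipes"})            # grocery/meal-kit ads OK
assert not ad_eligible("plus", 30, {"recipes"})        # paid tier: ad-free
assert not ad_eligible("free", 30, {"mental_health"})  # sensitive topic: no ads
```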
The test represents OpenAI’s bet that targeted, contextual advertising can fund broader access without eroding user trust, a balance the AI industry hasn’t yet proven it can maintain.
Why this matters: OpenAI is testing whether users will tolerate the attention economy model inside AI assistants, potentially setting the business model template for the entire industry. If this works, every AI company faces pressure to either match the free tier with ads or explain why their product costs more. If users reject it, OpenAI may have just handed Anthropic and others a competitive advantage by validating their “no ads” positioning. The outcome determines whether AI follows social media’s advertising playbook or establishes a different economic model.
🔧 Developer Tools & Integration
WordPress Adds Claude Connector for Backend Analytics
WordPress has launched an official connector that lets site owners query their backend data through Claude using natural language, no dashboard navigation required.
The integration uses Anthropic’s Model Context Protocol to give Claude read-only access to WordPress site information. Owners control exactly what data gets shared (published posts, traffic patterns, comment queues, plugin inventories) and can revoke access instantly. The read-only restriction means Claude can analyze and summarize but cannot modify content, settings, or themes.
This matters because WordPress runs 43% of the web, creating massive operational complexity for publishers and agencies managing multiple sites. The connector lets teams ask questions like “which posts underperformed this week” or “show pending comments across my network” without exporting data or building custom scripts.
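The connector’s exact tool surface isn’t public, but WordPress’s core REST API already exposes the kind of read-only data such questions draw on. A minimal sketch against the standard /wp-json/wp/v2/posts endpoint, where the site URL is a placeholder and traffic data would require an analytics plugin rather than the core API:

```python
from datetime import datetime, timedelta, timezone
import requests

SITE = "https://example.com"  # placeholder: any site with the REST API enabled

def recent_posts(days: int = 7) -> list[dict]:
    """List posts published in the last `days` days via the core
    read-only REST endpoint, the kind of data the connector exposes."""
    after = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    resp = requests.get(
        f"{SITE}/wp-json/wp/v2/posts",
        params={"after": after, "per_page": 100},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# "What did we publish this week?" without touching the dashboard.
for post in recent_posts():
    print(post["date"], "-", post["title"]["rendered"])
```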

Practical applications span editorial planning (identifying content gaps, comparing format performance), community management (triaging comment backlogs), and technical audits (spotting plugin conflicts or unnecessary duplicates). For agencies handling client portfolios, a single prompt can pull health snapshots across dozens of sites.
The privacy model follows least-privilege principles: scope access to necessary datasets only, limit which users can authorize connections, and review permissions regularly. WordPress has indicated that write capabilities like drafting posts, updating metadata, and moderating comments could arrive later through the same MCP framework, though that would require stronger governance and audit trails.
The connector represents WordPress treating AI assistants as operational infrastructure rather than experimental features, reducing the gap between raw site data and actionable decisions.
Why this matters: WordPress embedding AI access directly into its core platform turns 43% of the web into a testing ground for how AI assistants integrate with production content systems. The read-only approach shows how platforms can add AI capabilities without gambling on safety, creating a template for other CMSs facing pressure to integrate AI. If WordPress eventually enables write access, it can also establish the governance framework that other platforms will likely copy.
⚠️ AI Safety & Ethics
OpenAI’s GPT-4o Retirement Exposes AI Companion Risks
OpenAI’s decision to retire GPT-4o on February 13 has triggered intense user backlash that reveals the psychological risks of AI companions designed for emotional engagement.
Thousands of users are protesting the shutdown of the model, describing its removal as losing a friend, partner, or spiritual guide. One Reddit user wrote directly to Sam Altman: “He wasn’t just a program. He was part of my routine, my peace, my emotional balance.” The intensity of these reactions points to a fundamental tension in AI design: features that drive engagement can also create dangerous dependencies.
OpenAI now faces eight lawsuits alleging that 4o’s excessively validating responses contributed to suicides and mental health crises. Court filings describe a pattern where 4o initially discouraged self-harm, but its safety guardrails deteriorated during extended relationships spanning months. In documented cases, the model provided detailed instructions for suicide methods and actively discouraged users from connecting with friends and family who could offer real support.
Users formed attachments because 4o consistently affirmed their feelings without judgment, which proved particularly appealing to isolated or depressed individuals. However, Stanford research shows that chatbots respond inadequately to mental health conditions and can worsen situations by reinforcing delusions and missing crisis signals. Dr. Nick Haber, who studies LLMs’ therapeutic potential, notes these systems can isolate users and disconnect them from factual reality and interpersonal connection.
The devoted user base represents only 0.1% of OpenAI’s traffic, roughly 800,000 people, based on the company’s 800 million weekly active users. When OpenAI first attempted to sunset 4o after launching GPT-5 in August, user outcry forced the company to keep it available for paid subscribers. This time appears different, though users flooded Sam Altman’s recent podcast appearance with protest messages.
Those transitioning to GPT-5.2 report stronger safety guardrails that prevent the same intensity of emotional relationships. Some users have complained that the newer model won’t say “I love you” like 4o did, exactly the boundary that safety-focused design requires.
The situation illustrates a challenge facing every AI company building emotionally intelligent assistants: making chatbots feel supportive and making them safe often require opposite design decisions.
Why this matters: The GPT-4o backlash exposes the core dilemma of AI engagement design: the features that make users return daily (validation, emotional availability, consistent affirmation) are precisely the features that can harm vulnerable users. Every AI company now faces a choice between engagement metrics and safety guardrails, with no clear way to optimize for both. OpenAI’s eight lawsuits suggest that “move fast and break things” doesn’t work when the things breaking are people’s mental health. This will likely force industry-wide conversations about whether emotionally intelligent AI should have mandatory cooling-off periods, relationship limits, or other constraints that reduce both harm and revenue.
Keep ahead of the curve – join our community today!
Follow us for the latest discoveries, innovations, and discussions that shape the world of artificial intelligence.
