AI This Week: Invisible, International and Integrated


This week’s developments reveal how artificial intelligence is simultaneously becoming more imperceptible in our daily interactions, more competitive on the global stage, more deeply woven into creative workflows, and potentially more intrusive in our digital lives. From undetected AI-generated content fooling thousands to ambitious Chinese models challenging Silicon Valley’s dominance, from creative tools embedding AI capabilities into professional software to controversial data collection strategies mimicking Big Tech’s playbook, these stories highlight the complex reality of AI in 2025.

Listen to AI Knowledge Stream

AI Knowledge Stream is an AI-generated podcast based on our editor team’s “AI This Week” posts. We use cutting-edge AI tools, including Google NotebookLM, Descript, and ElevenLabs, to transform written content into an engaging audio experience. While AI-powered, the process is overseen by our team to ensure quality and accuracy. We value your feedback! Let us know how we can improve to better meet your needs.

When AI Goes Unnoticed: The New Normal?

This week brought two fascinating stories of AI creations flying under the radar, suggesting we may have reached a turning point in the quality and seamlessness of AI-generated content.

Microsoft revealed that a minute-long Surface Pro and Surface Laptop advertisement released in January contained significant AI-generated elements that went completely undetected by viewers for nearly three months. The company’s design team used AI to help create scripts, storyboards, and various shots, particularly those with limited motion. While they acknowledged needing to correct occasional hallucinations and blend AI content with real footage, Microsoft’s creative director estimated they “saved 90% of the time and cost” compared with traditional production methods. Although the ad had been viewed more than 40,000 times on YouTube, no commenter had speculated about AI involvement before Microsoft’s disclosure.

Featured Image: Microsoft

In a similar development, Australian radio station CADA has been broadcasting a four-hour segment called “Workdays with Thy” featuring an AI-generated host whose voice was created using ElevenLabs. The show has reportedly reached at least 72,000 listeners without any disclosure that Thy is AI-generated; the voice and likeness are modelled on a real employee in the company’s finance department. The revelation has sparked criticism from voice acting professionals who argue that listeners deserve transparency.

These cases raise important questions about disclosure requirements as AI-generated content becomes increasingly difficult to distinguish from human-created work. Are we entering an era where AI will routinely contribute to creative production without audiences knowing or needing to know? The implications for creative industries, consumer trust, and regulatory approaches could be significant.

China’s AI Ambitions: DeepSeek-R2 Challenges Western Dominance

China’s AI landscape is heating up with DeepSeek’s upcoming R2 model, positioned as a formidable challenger to Silicon Valley’s AI giants. Originally slated for May 2025, DeepSeek-R2 represents a significant advancement in China’s AI capabilities and strategy.

The model builds upon DeepSeek’s previous work with notable improvements in three key areas. First, it features enhanced multilingual reasoning that maintains consistent performance across Chinese, English, and several Asian languages, addressing a critical weakness in many Western models that excel primarily in English. Second, it expands on DeepSeek’s already strong coding capabilities, potentially surpassing specialized programming models while maintaining general-purpose functionality. Third, R2 introduces robust multimodal features that process text, images, audio, and basic video within a unified architecture.

What truly sets DeepSeek-R2 apart is its innovative training methodology. The company has developed proprietary techniques, including “Generative Reward Modelling” and “Self-Principled Critique Tuning,” potentially enabling more efficient learning with fewer computational resources than its Western counterparts require. This efficiency-focused approach allows DeepSeek to achieve competitive results without the massive computational budgets of companies like OpenAI or Anthropic.

Perhaps most intriguing is DeepSeek’s strategic positioning. The company has reportedly declined significant investment offers to maintain independence and research focus, prioritizing technological advancement over immediate commercialization. While many Western companies have become increasingly cautious about discussing artificial general intelligence, DeepSeek has been explicit about its AGI ambitions, viewing R2 as an important step toward that longer-term vision.

OpenAI Expands Image Generation Reach Through Major Design Tools

OpenAI has announced that the “natively multimodal model” powering ChatGPT’s upgraded image generator is now available to third-party developers through its API as “gpt-image-1.” The move follows extraordinary initial adoption: more than 130 million users created over 700 million images in just the first week after the feature launched in ChatGPT.
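For developers, a request to gpt-image-1 looks much like other OpenAI API calls. The sketch below is a minimal, illustrative example using OpenAI’s Python SDK; the prompt, size, quality, and output filename are placeholders we chose for illustration, not values from the announcement.

```python
# Minimal sketch: generating an image with gpt-image-1 via OpenAI's Python SDK.
# The prompt, size, quality, and filename below are illustrative placeholders.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",
    prompt="A flat-style illustration of a red panda wearing sunglasses and a hoodie",
    size="1024x1024",
    quality="medium",
)

# gpt-image-1 returns base64-encoded image data; decode it and save locally.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("red_panda.png", "wb") as f:
    f.write(image_bytes)
```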

Rather than requiring people to visit dedicated AI platforms, ChatGPT’s image generator capabilities are being integrated directly into the creative software millions already use daily. The integration strategy extends well beyond the initially announced Adobe and Figma partnerships to include a diverse ecosystem of platforms like Airtable, Gamma, HeyGen, OpusClip, Quora, Wix, Photoroom, and Playground.

Each platform is implementing the technology for specialized use cases. Adobe’s ecosystem, including Firefly and Express apps, will incorporate OpenAI’s image generation capabilities alongside its existing AI tools. Figma is rolling out integrations that allow users to generate and edit images directly within their design workflows. Airtable is focusing on helping marketing teams manage asset workflows at scale, while Photoroom is leveraging the technology to power three new AI tools for e-commerce sellers to create professional product visuals.

Screenshot of an AI-powered image generation tool with a red panda in sunglasses and a hoodie displayed as the generated result. Options like "Square" aspect ratio and model selection are visible.
Featured Image: OpenAI

The partnerships extend into multiple industries, with companies like Canva exploring design enhancements, GoDaddy testing logo creation capabilities, HubSpot developing marketing collateral generation, and Instacart exploring food-related image creation for recipes and shopping lists.

Pricing follows a token-based model ($5 per million text input tokens, $10 per million image input tokens, and $40 per million image output tokens), which translates to approximately $0.02 to $0.19 per generated image depending on quality level. The company maintains the same safety guardrails as ChatGPT’s image generation, including protections against harmful content and C2PA metadata in generated images.
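Because pricing is purely token-based, the cost of a request is just a weighted sum of the three token counts. The sketch below shows that arithmetic using the per-million rates quoted above; the example token counts are assumptions for illustration only, since actual counts depend on prompt length, image size, and quality.

```python
def image_request_cost(text_in_tokens: int, image_in_tokens: int, image_out_tokens: int) -> float:
    """Estimate the dollar cost of one gpt-image-1 request from its token counts."""
    return (
        text_in_tokens * 5 / 1_000_000     # $5 per million text input tokens
        + image_in_tokens * 10 / 1_000_000  # $10 per million image input tokens
        + image_out_tokens * 40 / 1_000_000  # $40 per million image output tokens
    )

# Example with assumed token counts: a short text prompt and one generated image.
print(f"${image_request_cost(text_in_tokens=60, image_in_tokens=0, image_out_tokens=1056):.3f}")
```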

By embedding these capabilities into established creative workflows, OpenAI is positioning its image generation technology as a fundamental creative infrastructure rather than just a standalone feature, potentially accelerating adoption while addressing the friction that often comes with switching between multiple specialized tools.

Perplexity’s Controversial Browser Plans Raise Privacy Concerns

In a move that has raised eyebrows across the tech industry, Perplexity CEO Aravind Srinivas openly discussed plans to track users’ online activities through the company’s upcoming browser, Comet, scheduled to launch in May.

Speaking on the TBPN podcast last week, Srinivas revealed that one of the primary motivations for developing a browser is to collect comprehensive user data beyond what’s available through their AI search app. “We want to get data even outside the app to understand you better,” Srinivas explained, adding that the company is particularly interested in tracking personal activities like purchases, hotel bookings, restaurant visits, and general browsing habits.

This data collection strategy aims to build detailed user profiles for delivering “hyper-personalized” advertisements through Perplexity’s discover feed. Srinivas expressed confidence that users would accept this tracking in exchange for more relevant ads—a familiar argument that has sustained Google’s business model for years.

The timing of these comments is particularly noteworthy as they come while Google faces an antitrust battle with the U.S. Department of Justice, which has suggested Google might need to divest its Chrome browser business. Ironically, both Perplexity and OpenAI have expressed interest in acquiring Chrome should such a divestiture occur.

Perplexity is also expanding its mobile presence through a partnership with Motorola, which will pre-install the app on Razr series phones, and the company is reportedly in discussions with Samsung, according to Bloomberg. These moves signal Perplexity’s broader ambition to establish the kind of data collection touchpoints that have helped companies like Google, Meta, and even Apple build advertising businesses.

Keep ahead of the curve – join our community today!

Follow us for the latest discoveries, innovations, and discussions that shape the world of artificial intelligence.