Category: News

  • Amazon Launches Agentic AI-Powered Seller Assistant for Third-Party Merchants

    Amazon unveiled an upgraded Seller Assistant, an AI agent designed to automate and optimize tasks for its third-party sellers, who account for over 60% of sales on the platform. Powered by Amazon Bedrock, Amazon Nova, and Anthropic’s Claude models, this “agentic” AI goes beyond simple chatbots by reasoning, planning, and executing actions with seller authorization—transforming it into a proactive business partner.

    Key Features and Capabilities

    • Inventory and Fulfillment Optimization: The agent continuously monitors stock levels, identifies slow-movers, and suggests pricing tweaks or removals. It analyzes demand forecasts to recommend optimal shipment plans via Fulfillment by Amazon (FBA), balancing costs, speed, and availability.
    • Account Health Monitoring: It scans for issues like policy violations, poor customer metrics, or compliance gaps in real-time, proposing and implementing fixes (e.g., updating listings) upon approval to prevent sales disruptions.
    • Compliance Assistance: Handles complex regulations by alerting sellers to missing certifications during product setup and guiding document submissions, reducing errors in international sales.
    • Advertising Enhancement: Integrated with Creative Studio, it generates tailored ad creatives from conversational prompts, analyzing product data and shopper trends. Early users report up to 338% improvements in click-through rates.
    • Business Growth Strategies: Reviews sales data and customer behavior to recommend expansions, such as new categories, seasonal plans, or global markets, helping sellers scale efficiently.
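    The inventory and fulfillment logic described above can be illustrated with a classic reorder-point rule. This is a minimal sketch of the kind of check such an agent might run, not Amazon's actual implementation; all SKUs, quantities, and parameters are hypothetical.

    ```python
    # Minimal reorder-point sketch: flag SKUs that need a replenishment
    # shipment, balancing forecast demand against current stock.
    # All figures are hypothetical, not Amazon's real logic.

    def reorder_point(daily_demand: float, lead_time_days: float,
                      safety_stock: float) -> float:
        """Stock level at which a new FBA shipment should be triggered."""
        return daily_demand * lead_time_days + safety_stock

    def plan_shipments(inventory: dict[str, int], forecasts: dict[str, float],
                       lead_time_days: float = 7.0,
                       safety_days: float = 3.0) -> list[str]:
        """Return SKUs whose on-hand stock has fallen below their reorder point."""
        to_replenish = []
        for sku, on_hand in inventory.items():
            demand = forecasts.get(sku, 0.0)  # forecast units/day
            rop = reorder_point(demand, lead_time_days,
                                safety_stock=demand * safety_days)
            if on_hand < rop:
                to_replenish.append(sku)
        return to_replenish

    if __name__ == "__main__":
        stock = {"SOCKS-RED": 40, "SOCKS-BLUE": 500}
        forecast = {"SOCKS-RED": 12.0, "SOCKS-BLUE": 8.0}  # units/day
        # SOCKS-RED is below 12*7 + 12*3 = 120 units, so it gets flagged
        print(plan_shipments(stock, forecast))  # → ['SOCKS-RED']
    ```

    The real system layers demand forecasting, cost trade-offs, and seller approval on top of rules like this; the sketch only shows the core threshold check.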

    Sellers interact via natural language in Seller Central, where the agent provides instant answers, resources, or automated actions—freeing up time for core business activities. For instance, it can coordinate inventory orders or draft growth plans autonomously.

    Benefits for Sellers

    This tool addresses pain points amid trade tensions and rising costs, like predicting demand to avoid overstocking. Sellers like Alfred Mai of Sock Fancy praise it as a “24/7 business consultant” that handles routine ops while keeping humans in control. By automating tedious tasks, it could save hours weekly, boost efficiency, and drive revenue—especially for small merchants competing in a volatile e-commerce landscape.

    Rollout and Availability

    Currently available at no extra cost to all U.S. sellers in Seller Central, with global expansion planned for the coming months. Additional features, like advanced analytics, will roll out progressively. Amazon positions this as part of broader AI investments, following tools like Rufus for shoppers.

    As AI agents proliferate, this launch underscores Amazon’s push to retain seller loyalty amid competition from Shopify and Walmart. Early feedback highlights its potential, though some note the need for oversight to avoid over-reliance. For more, check Seller Central or Amazon’s innovation blog.

  • YouTube Integrates Veo 3 AI Video Generation into Shorts for Creators

    YouTube has officially rolled out Veo 3 Fast, a custom version of Google DeepMind’s advanced AI video generation model, directly into its Shorts platform, enabling creators to generate high-quality video clips from simple text prompts complete with native audio. Announced during the “Made on YouTube” event on September 16, 2025, this integration marks a significant expansion of generative AI tools, aiming to democratize short-form video creation while addressing concerns over authenticity and misuse.

    Veo 3 Fast, optimized for low latency at 480p resolution, allows users to produce 8-second clips within the YouTube mobile app by describing scenes like “a cat surfing on a cosmic wave” or “a bustling futuristic city at dusk.” For the first time, these generations include synchronized sound effects, ambient noise, and dialogue, enhancing realism and creative control. Creators can select styles such as cinematic or animated, and the tool supports standalone clips or green screen backgrounds for remixing into existing Shorts. Initially rolling out in the US, UK, Canada, Australia, and New Zealand, the feature will expand globally in the coming months, with no additional cost for eligible creators.

    This builds on earlier Veo integrations, like Dream Screen for backgrounds introduced with Veo 2 in February 2025, but Veo 3 represents a substantial upgrade with improved prompt adherence, physics-realistic motion, and audio capabilities. Additional Veo enhancements include applying motion from videos to images, stylizing content (e.g., pop art or origami), and adding objects like characters or props via text descriptions, set to launch soon. YouTube CEO Neal Mohan highlighted during the event that these tools “push the limits of human creativity,” especially as Shorts now averages over 200 billion daily views.

    Complementing Veo 3, YouTube introduced “Edit with AI,” which transforms raw camera footage into draft Shorts by selecting highlights, adding music, transitions, and even voice-overs in English or Hindi. A new remixing tool, Speech to Song, uses DeepMind’s Lyria 2 to convert video dialogue into catchy soundtracks, with automatic credits to originals. To promote transparency, all AI-generated content is watermarked with SynthID—a detectable digital embed in each frame—and labeled as AI-made.

    The rollout coincides with strengthened deepfake protections, including open beta access to likeness detection tools for all YouTube Partner Program members, allowing opt-in uploads of reference images to identify and remove unauthorized AI impersonations. This addresses rising concerns over misinformation and “AI slop,” as critics worry about flooding the platform with low-quality, automated content that could bury human creators.

    Social media reactions are mixed. On X, users like @FongBoro praised the suite’s potential for over 30 million creators, noting milestones like $100 billion in payments over four years. However, Reddit discussions in r/PartneredYoutube express fears of “soulless” AI overwhelming authentic content, with some predicting a shift to endless, prompt-based Shorts. Spanish and Turkish posts, such as those from @sijocabe and @Bigumigu, highlight the revolutionary aspect for global creators.

    As YouTube competes with TikTok and Instagram Reels, this integration could turbocharge Shorts’ growth, but it underscores the need for balanced innovation. Creators are encouraged to review outputs and follow guidelines to ensure responsible use. With Veo 3 also available via Gemini apps for Pro/Ultra subscribers, the feature positions YouTube as a leader in AI-driven content creation.

  • Google Launches Experimental AI Desktop App for Windows to Rival Built-In Windows Search and Copilot

    Google has taken a bold step into Microsoft’s territory with the launch of an experimental desktop app for Windows, designed to supercharge search capabilities and directly rival the built-in Windows Search and Copilot features. Announced on September 16, 2025, via Google’s official blog and Search Labs, the “Google app for Windows” aims to provide a seamless, Spotlight-like experience on the desktop, integrating AI-powered queries, local file access, and Google Lens for enhanced productivity.

    The app, currently available only in the US for Windows 10 and above, can be summoned instantly with the Alt + Space shortcut, overlaying a floating search bar without interrupting workflows. Users can query across local files, installed applications, Google Drive documents, and the web from a single interface, pulling in Knowledge Graph results for quick answers or launching apps and websites directly. A standout feature is the built-in Google Lens integration, allowing users to select and search anything on their screen—translating text or images, solving math problems, identifying objects, or getting homework help—all without switching tabs.

    What sets this apart is the AI Mode, powered by Google’s Gemini 2.5 model, enabling complex, multi-part questions with deeper, conversational responses. This positions the app as a direct competitor to Microsoft’s AI enhancements in Windows, such as Copilot, which have faced criticism for being clunky or slow. Industry observers note that while Microsoft dominates the desktop ecosystem, Google’s app leverages its superior search algorithms and AI expertise to offer a more fluid experience, potentially drawing users away from native tools. The app supports dark mode and requires a personal Google Account (Workspace not supported), emphasizing its experimental nature with known limitations.

    This rare foray into native Windows apps—beyond staples like Drive and Quick Share—signals Google’s strategy to embed its services deeper into rival platforms, challenging the status quo of desktop search. Early reactions on X (formerly Twitter) are enthusiastic, with users hailing it as a “personal assistant that actually listens” and a “war on Windows Search.” One post described it as bringing “the macOS Spotlight experience to Windows, supercharged by Gemini 2.5.” However, privacy concerns arise, as the app accesses local files and screen content, prompting calls for robust data protections.

    To access it, users must opt into Search Labs via their Google Account settings, with limited spots available for testing and feedback. As AI integrates further into daily computing, this launch could reshape desktop interactions, especially if it expands beyond experiments. For now, it’s an intriguing challenge to Microsoft’s stronghold, highlighting the intensifying AI arms race in productivity tools.

  • YouTube Unveils AI Suite with Deepfake Protection Tools at Made On YouTube Event

    In a major push toward AI integration, YouTube announced an expanded suite of artificial intelligence tools designed to empower creators while addressing growing concerns over deepfakes and content authenticity. The announcements, made during the annual Made On YouTube event in New York on September 16, 2025, include advanced deepfake detection features, performance analytics, and creative enhancements, signaling Google’s commitment to balancing innovation with creator protection in an era of generative AI proliferation.

    At the forefront is the open beta launch of YouTube’s likeness detection tool, now available to all members of the YouTube Partner Program. This AI-powered feature allows creators, celebrities, athletes, and musicians to opt-in by uploading a reference image of their face, enabling the platform to scan and identify unauthorized AI-generated videos featuring their likeness. Building on a December 2024 partnership with Creative Artists Agency (CAA), the tool helps manage nonconsensual deepfakes, which have surged 550% since 2021, often used for misinformation, scams, or unauthorized endorsements. Creators receive notifications when such content is detected, allowing them to request takedowns under YouTube’s updated privacy policy, which explicitly covers AI-generated impersonations. This extends existing copyright protections, which have already processed billions of claims and generated substantial revenue for rights holders.

    Complementing this, YouTube is developing separate detection processes for synthetic voices and singing, alerting publishers or talent agents to misuse in videos. These tools aim to safeguard against the rising tide of deepfakes affecting artists, politicians, and influencers, with political parties particularly interested in monitoring unauthorized depictions of figures. As AI tools become more accessible, YouTube emphasized that these protections ensure creators retain control over their digital representations, preventing business threats from cloned likenesses.

    The broader AI suite introduces “Ask Studio,” a personalized AI “creative partner” that provides summaries of video performance, commenter sentiment, and audience insights, helping lighten creators’ workloads. Other features include instant video editing for Shorts, real-time live stream upgrades, automatic dubbing expansions, and enhanced deepfake detection for broader content moderation. These tools are rolling out over the coming months, with the likeness detection becoming fully available to Partner Program members soon.

    The announcements come amid regulatory scrutiny and ethical debates over AI’s role in content creation. YouTube’s moves align with industry trends, as competitors like TikTok and Meta also grapple with deepfake challenges. Early reactions on social media praise the suite’s potential to foster secure creativity, with one X user noting, “The future of content creation is here—and it’s more creative and secure than ever.” However, experts caution that while these tools are a step forward, proactive detection alone may not fully curb misuse, and users must still actively monitor for impersonations.

    As generative AI evolves, YouTube’s suite positions the platform as a leader in responsible innovation, potentially setting standards for the creator economy. With over 2.7 billion monthly users, these updates could significantly impact how content is produced and protected globally.

  • Google DeepMind Proposes “Sandbox Economies” for AI Agents in “Virtual Agent Economies” Paper

    Google DeepMind researchers, led by Nenad Tomašev, published a paper on arXiv titled “Virtual Agent Economies,” exploring the rise of autonomous AI agents forming a new economic layer. The study frames this as a “sandbox economy,” where agents transact at scales beyond human oversight, potentially automating diverse cognitive tasks across industries.

    The framework analyzes agent economies along two dimensions: origins (emergent vs. intentional) and permeability (permeable vs. impermeable boundaries with the human economy). Current trends suggest a spontaneous, permeable system, offering vast coordination opportunities but risking systemic instability, inequality, and ethical issues. The authors advocate for proactive design to ensure steerability and alignment with human flourishing.

    Examples illustrate potential applications. In science, agents could accelerate discovery through ideation, experimentation, and resource sharing via blockchain for fair credit. Robotics might involve agents negotiating tasks, compensating for energy and time. Personal assistants could bid on user preferences, like vacation bookings, yielding concessions for compensation to prioritize high-value tasks.

    Opportunities include enhanced efficiency and “mission economies” directing agents toward global challenges, such as sustainability or health. However, risks encompass market failures, adversarial attacks, reward hacking, and inequality amplification if access is uneven.

    Key design proposals include auction mechanisms for resource allocation and preference resolution, ensuring fairness. Mission economies, inspired by Mazzucato’s work, could incentivize collective goals via subsidies or taxes. Socio-technical infrastructure is crucial: verifiable credentials for trust, blockchain for transparency, and governance for safety. The paper discusses integrating human preferences, addressing sybil attacks, and fostering cooperative norms.
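    The auction mechanisms proposed for resource allocation can be illustrated with a sealed-bid second-price (Vickrey) auction, in which the highest bidder wins but pays the runner-up's bid, making truthful bidding a dominant strategy. This is a generic textbook sketch, not code from the paper; agent names and bid values are hypothetical.

    ```python
    # Sealed-bid second-price (Vickrey) auction: the highest bidder wins
    # but pays the second-highest bid, which makes truthful bidding a
    # dominant strategy -- a useful property when allocating scarce
    # resources (compute, API quota) among AI agents. Hypothetical sketch.

    def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
        """Return (winner, price paid). Requires at least two bidders."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner, _ = ranked[0]
        _, price = ranked[1]  # winner pays the runner-up's bid
        return winner, price

    if __name__ == "__main__":
        bids = {"planner_agent": 9.0, "research_agent": 7.5, "booking_agent": 4.0}
        print(second_price_auction(bids))  # → ('planner_agent', 7.5)
    ```

    Because agents cannot lower their price by shading bids, the mechanism elicits honest valuations — one reason auction design features prominently in the paper's proposals for fair preference resolution.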

    Drawing from economics, game theory, and AI safety, the authors reference historical tech shifts and warn of parallels to financial crises. They emphasize collective action to manage permeability, preventing contagion while enabling beneficial integration.

    This visionary paper calls for interdisciplinary collaboration to architect agent markets, balancing innovation with ethics. As AI agents proliferate—evidenced by systems in education, healthcare, and more—intentional design could unlock unprecedented value, steering toward equitable, sustainable outcomes.


  • OpenAI Study Reveals How People Use ChatGPT in Comprehensive Research Paper

    OpenAI released a comprehensive research paper titled “How People Use ChatGPT,” authored by Aaron Chatterji, Tom Cunningham, David Deming, Zoë Hitzig, Christopher Ong, Carl Shan, and Kevin Wadman. The study analyzes the rapid adoption and usage patterns of ChatGPT, the world’s largest consumer chatbot, from its November 2022 launch through July 2025. By then, ChatGPT had amassed 700 million users—about 10% of the global adult population—sending 18 billion messages weekly, marking unprecedented technological diffusion.

    Using a privacy-preserving automated pipeline, the researchers classified a representative sample of conversations from consumer plans (Free, Plus, Pro). Key findings show non-work-related messages growing faster than work-related ones, rising from 53% to over 70% of usage. Work messages, while substantial, declined proportionally due to evolving user behavior within cohorts rather than demographic shifts. This highlights ChatGPT’s significant impact on home production and leisure, potentially rivaling its productivity effects in paid work.

    The paper introduces taxonomies to categorize usage. Nearly 80% of conversations fall into three topics: Practical Guidance (e.g., tutoring, how-to advice, ideation), Seeking Information (e.g., facts, current events), and Writing (e.g., drafting, editing, summarizing). Writing dominates work tasks at 40%, with two-thirds involving modifications to user-provided text. Contrary to prior studies, coding accounts for only 4.2% of messages, and companionship or emotional support is minimal (under 2%).

    A novel “Asking, Doing, Expressing” rubric classifies intents: Asking (49%, seeking info/advice for decisions), Doing (40%, task performance like writing/code), and Expressing (11%, sharing views). At work, Doing rises to 56%, emphasizing generative AI’s output capabilities. Mapping to O*NET work activities, 58% involve information handling and decision-making, consistent across occupations, underscoring ChatGPT’s role in knowledge-intensive jobs.
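    The headline percentages in the rubric are simple proportions over labeled messages. A toy tally, using made-up labels rather than OpenAI's actual LLM-classifier pipeline, reproduces the reported overall split:

    ```python
    # Toy tally of "Asking / Doing / Expressing" intent shares over a
    # batch of already-labeled messages. Labels here are fabricated for
    # illustration; the paper's real pipeline classifies conversations
    # with privacy-preserving LLM classifiers.
    from collections import Counter

    def intent_shares(labels: list[str]) -> dict[str, float]:
        """Fraction of messages per intent, rounded to two decimals."""
        counts = Counter(labels)
        total = sum(counts.values())
        return {intent: round(n / total, 2) for intent, n in counts.items()}

    if __name__ == "__main__":
        # 100 hypothetical messages matching the paper's overall split
        labels = ["Asking"] * 49 + ["Doing"] * 40 + ["Expressing"] * 11
        print(intent_shares(labels))
        # → {'Asking': 0.49, 'Doing': 0.4, 'Expressing': 0.11}
    ```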

    Demographics reveal early male dominance (80%) narrowing to near parity by 2025. Users under 26 send nearly half of messages, with growth fastest in low- and middle-income countries. Educated professionals in high-paid roles use it more for work, aligning with economic value from decision support.

    The study used LLM classifiers validated against public datasets, ensuring privacy—no humans viewed messages. Appendices detail prompts, validation (high agreement on key tasks), and a ChatGPT timeline, including models like GPT-5.

    Overall, the paper argues ChatGPT enhances productivity via advice in problem-solving, especially for knowledge workers, while non-work uses suggest vast consumer surplus. As AI evolves, understanding these patterns informs its societal and economic impacts.


  • Google Gemini 3 Flash Spotted on LM Arena as “Oceanstone” – Secret Pre-Release Testing Underway?

    In a development that’s sending ripples through the AI community, Google’s highly anticipated Gemini 3 Flash appears to have been quietly deployed on the popular LMSYS Chatbot Arena (LM Arena) under the codename “oceanstone.” The stealth release, first highlighted in social media discussions on September 15, suggests Google is conducting rigorous pre-launch testing for what could be its next-generation lightweight language model. While not officially confirmed by Google DeepMind, early indicators point to impressive performance, positioning “oceanstone” as a potential frontrunner in efficiency and speed.

    The buzz ignited with a viral X (formerly Twitter) post from AI engineer Mark Kretschmann (@mark_k), who on September 15 announced: “Google Gemini 3 Flash was secretly released on LM Arena as codename ‘oceanstone’ 🤫.” The post quickly garnered over 1,200 likes and 50 reposts, sparking widespread speculation. Kretschmann, known for his insights into AI benchmarks, didn’t provide screenshots but referenced the model’s appearance on the arena’s leaderboard, where users anonymously battle AI models in blind comparisons to generate Elo ratings based on human preferences.
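    The leaderboard's Elo ratings come from exactly such pairwise human votes. The standard Elo update for one head-to-head comparison looks like the following generic sketch (the K-factor and ratings are illustrative; LM Arena's actual aggregation differs in detail):

    ```python
    # Standard Elo update for a single head-to-head vote, the kind of
    # blind pairwise comparison LM Arena aggregates into its leaderboard.
    # Generic sketch: K-factor and ratings are illustrative, not LM
    # Arena's actual parameters.

    def expected_score(r_a: float, r_b: float) -> float:
        """Modeled probability that model A beats model B."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def elo_update(r_a: float, r_b: float, a_won: bool,
                   k: float = 32.0) -> tuple[float, float]:
        """Return new (rating_a, rating_b) after one vote."""
        e_a = expected_score(r_a, r_b)
        s_a = 1.0 if a_won else 0.0
        delta = k * (s_a - e_a)
        return r_a + delta, r_b - delta  # zero-sum adjustment

    if __name__ == "__main__":
        # A hypothetical newcomer rated 1200 upsets an incumbent at 1300:
        # the upset moves ~20 points because the win was unexpected.
        print(elo_update(1200.0, 1300.0, a_won=True))
    ```

    Upsets by an unheralded model move ratings quickly, which is why a strong stealth entry like "oceanstone" can climb the rankings within hours of appearing.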

    Subsequent posts amplified the news. Kol Tregaskes (@koltregaskes) shared a screenshot of the LM Arena interface showing “oceanstone” in the rankings, questioning if it’s Gemini 3 Flash or a new Gemma variant. An anonymous internal source, cited in a thread by @synthwavedd, described “oceanstone” as a “3.0 S-sized model” – implying it’s in the same compact size class as the current Gemini 2.5 Flash, optimized for low-latency tasks like agentic workflows and multimodal processing. This aligns with Google’s pattern of using codenames for testing; for instance, the recent Gemini 2.5 Flash Image was tested as “nano-banana” before its August 2025 public reveal, where it dominated image generation leaderboards with a record 171-point Elo lead.

    LM Arena, a crowdsourced platform with millions of user votes, is a key testing ground for AI models. “Oceanstone” reportedly debuted late on September 15, climbing the ranks rapidly in categories like coding, reasoning, and general chat. Early user feedback on X praises its speed and coherence, with one developer noting it outperforms Gemini 2.5 Flash in quick-response scenarios without sacrificing quality. Turkish AI researcher Mehmet Eren Dikmen (@ErenAILab) echoed the excitement, posting (translated from Turkish): “The Gemini 3.0 Flash model is being tested on LM Arena under the name Oceanstone. We’re finally putting an end to this long wait.”

    This isn’t Google’s first rodeo with secret arena drops. Past examples include “nightwhisper” and “dayhush” for unreleased Gemini iterations, as discussed in Reddit’s r/Bard community back in April. The timing is intriguing: It follows a flurry of Google AI announcements, including Veo 3 video generation in early September and Gemma 3’s March release. With competitors like OpenAI’s GPT-5 and Anthropic’s Claude 3.7 pushing boundaries, Gemini 3 Flash could emphasize “thinking” capabilities – Google’s hybrid reasoning mode that balances cost, latency, and accuracy.

    Google has yet to comment, but developers can access similar previews via the Gemini API in AI Studio. Artificial Intelligence news account @cloudbooklet urged: “New Arena Model Alert! A stealth entry just dropped: oceanstone 💎✨ Is this Gemini 3 Flash or a brand-new Gemma variant?” Community guesses lean toward Gemini 3, given the “Flash” branding for fast models.

    As testing continues, “oceanstone” could reshape the lightweight AI landscape. Stay tuned – if history repeats, an official unveiling might follow soon, potentially integrating with Vertex AI for enterprise use. For now, AI enthusiasts are flocking to LM Arena to vote and probe its limits.

  • Online marketplace Fiverr to lay off 30% of workforce in AI push

    Fiverr International, an Israel-based online marketplace for freelance services, announced a significant restructuring, laying off 30% of its workforce—approximately 250 employees—as part of its transformation into an “AI-first” company. This move, detailed in a letter from CEO Micha Kaufman to employees, aims to create a leaner, faster organization with a modern AI-focused infrastructure, as reported by Reuters and other sources. The layoffs, affecting various departments, reflect a broader trend in the tech industry toward AI-driven efficiency.

    Fiverr, which had 762 employees as of December 2024, is doubling down on artificial intelligence to automate systems and streamline operations. Kaufman described the workforce reduction as a “painful reset” necessary to return to a “startup mode” with fewer management layers and enhanced productivity. The company has already integrated AI tools like Neo, an AI-powered project matching system, Fiverr Go for project scoping, and Dynamic Matching for marketplace efficiency. These tools leverage natural language processing and machine learning to reduce human intervention in routine tasks, such as customer support and fraud detection, which now rely on algorithms to handle inquiries and analyze transaction patterns.

    The restructuring aligns Fiverr with other tech giants like Salesforce, which recently cut 4,000 jobs to prioritize AI agents. Kaufman emphasized that AI requires a different skill set and mindset, necessitating a simplified infrastructure built from the ground up. Despite the layoffs, Fiverr maintains its 2025 financial guidance, expecting to achieve profit targets a year earlier than planned by reinvesting savings into AI development. The company assures that marketplace operations will remain unaffected in the near term, with plans to upskill existing staff and recruit AI-native talent.

    This pivot comes amid a surge in demand for AI expertise on Fiverr’s platform, with a reported 18,347% increase in searches for AI specialists over the past six months, as noted in a May 2025 Nasdaq report. Freelancers are increasingly sought for complex tasks like multi-agent system development, reflecting a shift from basic chatbots to advanced automation. However, the rise of generative AI tools like ChatGPT has raised concerns among freelancers, with a 21% drop in automation-prone job postings, particularly in writing and graphic design, according to a WINSS study.

    Fiverr’s stock fell over 4% following the announcement, signaling investor caution about short-term disruptions, as reported by Finimize. Yet, Kaufman remains optimistic, framing the transformation as a chance to reimagine work, much like Fiverr did 16 years ago. By fostering smaller, AI-enhanced teams, Fiverr aims to boost productivity tenfold and compete in a rapidly evolving digital economy. As the company navigates this AI-driven shift, it sets a precedent for balancing innovation with operational efficiency, though challenges like workforce morale and market perception persist.

  • OpenAI Launches GPT-5-Codex, a GPT-5 Variant Optimized for Agentic, Autonomous Coding

    OpenAI unveiled GPT-5-Codex, a specialized version of its GPT-5 model optimized for agentic coding, marking a significant advancement in AI-assisted software development. Integrated into OpenAI’s Codex ecosystem, this model enhances the ability to autonomously handle complex programming tasks, from debugging to large-scale code refactoring, as detailed in OpenAI’s announcement and reported by TechCrunch and VentureBeat.

    GPT-5-Codex is designed to function as an autonomous coding partner, capable of working independently for up to seven hours on intricate tasks. Unlike the general-purpose GPT-5, it is fine-tuned on real-world engineering workflows, enabling it to build projects from scratch, add features, conduct tests, and perform code reviews with high accuracy. It scores 74.5% on SWE-bench Verified, a benchmark for software engineering tasks, outperforming GPT-5’s 72.8%, and achieves 51.3% on code refactoring tasks compared to GPT-5’s 33.9%. The model dynamically adjusts its “thinking time” based on task complexity, ensuring efficiency for quick fixes and thorough reasoning for extensive projects.

    Accessible through Codex CLI, IDE extensions (e.g., VSCode, Cursor), GitHub for code reviews, and the ChatGPT mobile app, GPT-5-Codex integrates seamlessly into developer workflows. It supports multi-platform development, allowing tasks to move between local and cloud environments without losing context. Enhanced features include a rebuilt Codex CLI with to-do list tracking, image support for wireframes, and a cloud environment with 90% faster completion times due to auto-configured setups and dependency installations. Developers can also request specialized GitHub reviews, such as security vulnerability checks, by tagging “@codex.”

    OpenAI emphasizes that GPT-5-Codex complements tools like GitHub Copilot, focusing on high-level task delegation rather than keystroke-level autocomplete. Internally, it reviews most of OpenAI’s pull requests, catching hundreds of issues daily, though the company advises using it as an additional reviewer, not a replacement for human oversight. The model’s code review capabilities, trained to identify critical flaws, reduce incorrect comments to 4.4% compared to GPT-5’s 13.7%, with 52% of its comments deemed high-impact by engineers.

    Available to ChatGPT Plus, Pro, Business, Edu, and Enterprise users, GPT-5-Codex scales usage based on subscription tier, with Plus covering focused sessions and Pro supporting full workweeks. While not yet available via API, OpenAI plans future integration. The model’s training incorporates safety measures, treating it as high-capability in biological and chemical domains to minimize risks, as outlined in its system card addendum.

    Industry reactions, shared on platforms like Reddit, highlight GPT-5-Codex’s speed and cost-effectiveness compared to competitors like Anthropic’s Claude Code, with some developers switching due to its superior performance in vibe-coding and full-stack development. By positioning Codex as a collaborative engineer, OpenAI aims to reshape software development, boosting productivity while sparking discussions about job displacement and the future of AI-driven coding.

  • Microsoft Brings Free Copilot Chat to Office Apps including Word, Excel, PowerPoint, Outlook, and OneNote, for all Microsoft 365 business users

    Microsoft announced the integration of free Copilot Chat features into its Office apps, including Word, Excel, PowerPoint, Outlook, and OneNote, for all Microsoft 365 business users. This move, as reported by The Verge and Slashdot, introduces a content-aware AI chat sidebar designed to enhance productivity without requiring an additional Microsoft 365 Copilot license. The initiative aims to make AI-driven assistance accessible to a broader range of users, streamlining tasks like drafting documents, analyzing spreadsheets, and creating presentations.

    Copilot Chat, powered by advanced large language models like GPT-4o, is grounded in web data and tailored to understand the content users are working on within Microsoft 365 apps. For instance, in Word, it can draft or rewrite documents, while in Excel, it offers data analysis suggestions, and in PowerPoint, it aids in slide creation. Unlike the premium Microsoft 365 Copilot, which costs $30 per user per month and provides deeper integration with work data (e.g., emails, meetings, and documents via Microsoft Graph), the free Copilot Chat is included at no extra cost for Microsoft 365 subscribers. This makes it a powerful entry point for organizations to adopt AI tools.

    The rollout, detailed on Microsoft’s blog, began in mid-August 2025 and is being phased in over weeks to ensure quality. Users can access Copilot Chat via a sidebar in the aforementioned apps or through the Microsoft 365 Copilot app on platforms like Windows, iOS, and Android. To use it, users must pin Copilot Chat in their app interface, a process outlined in Microsoft’s support documentation. The free version supports features like file uploads, content summarization, and AI-generated images, though premium features like priority access to GPT-5 and advanced in-app editing remain exclusive to paid subscribers.

    Microsoft emphasizes enterprise data protection (EDP) with Copilot Chat, ensuring prompts and responses adhere to the same security standards as Exchange and SharePoint. IT administrators can manage access and web search capabilities through the Microsoft 365 admin center, with options to disable web queries for sensitive environments like government clouds. This aligns with Microsoft’s AI principles, prioritizing security and privacy for business use.

    While the free Copilot Chat lacks voice capabilities and direct access to organizational data, it offers significant value for routine tasks. Microsoft’s strategy, as noted by Seth Patton, General Manager of Microsoft 365 Copilot product marketing, is to democratize AI access while reserving advanced features for premium plans. The company also plans to bundle additional Copilot services (e.g., sales and finance) into the premium subscription starting October 2025, without raising business plan prices.

    This update positions Microsoft 365 as a leader in AI-driven productivity, competing with other AI assistants while maintaining affordability. By embedding Copilot Chat in widely used Office apps, Microsoft empowers businesses to integrate AI seamlessly, fostering efficiency and innovation across diverse workflows.