Category: News

  • Anthropic launches “Learning” mode for Claude Code, turning it into an interactive coding mentor

    Anthropic has launched a new “Learning” mode for Claude Code. This mode is designed to help users learn coding interactively while collaborating with Claude. In this mode, Claude occasionally pauses and marks sections with a “#TODO” comment, prompting users to write code themselves, essentially acting like a coding mentor.

    There is also an “Explanatory” mode where Claude explains its reasoning process as it codes, helping users understand architectural choices, trade-offs, and best practices.

    This Learning mode was initially available only for Claude for Education users but is now accessible to all Claude.ai users through a new option in the style dropdown menu. The feature aims to promote deeper understanding and independent thinking for coders at different levels.

    Anthropic is also opening the ability for developers to create custom learning modes using Claude Code’s new Output Styles feature, allowing even more personalized learning experiences.

    This launch marks a step towards making AI a collaborative partner in coding education and practice rather than just a tool for getting direct answers.

  • Imagen 4, Google’s latest text-to-image AI model, is now generally available through the Gemini API, Google AI Studio, and integration with Google Workspace apps

    Imagen 4, Google’s latest text-to-image AI model, is now generally available. It offers significant improvements in image quality, particularly in rendering text accurately within images. Google has introduced three model variants for different use cases and pricing:

    • Imagen 4 Fast: For rapid image generation at $0.02 per image.
    • Imagen 4 Standard: The flagship model designed for most use cases, priced at $0.04 per image.
    • Imagen 4 Ultra: For highly precise image generation closely aligned with text prompts, priced at $0.06 per image.

    Imagen 4 is accessible through the Gemini API, Google AI Studio, and integration with Google Workspace apps like Slides, Docs, and Vids. It delivers faster performance and higher fidelity images compared to its predecessor Imagen 3, supporting photorealistic and abstract styles with detailed textures, accurate lighting, and clean text rendering.
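
    To make the access path concrete, here is a minimal sketch of generating an image with Imagen 4 through the Gemini API using the Google Gen AI Python SDK. The model identifier and field names below follow the SDK’s conventions but are assumptions; check Google’s documentation for the exact ID of the Fast, Standard, or Ultra variant you want.

    ```python
    # Minimal sketch: Imagen 4 image generation via the Gemini API.
    # Requires `pip install google-genai`. The model ID is an assumption;
    # verify the exact name in Google AI Studio before use.
    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_GEMINI_API_KEY")  # placeholder key

    response = client.models.generate_images(
        model="imagen-4.0-generate-001",  # assumed ID for Imagen 4 Standard
        prompt="A poster for a jazz festival with the headline text 'Blue Nights'",
        config=types.GenerateImagesConfig(
            number_of_images=1,
            aspect_ratio="1:1",
        ),
    )

    # The SDK returns generated images as raw bytes; save the first one to disk.
    with open("imagen4_output.png", "wb") as f:
        f.write(response.generated_images[0].image.image_bytes)
    ```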

    This availability enables developers, creators, and businesses to use Imagen 4 across a broad range of creative projects, from marketing designs to game art and editorial content.

    Additionally, Google uses non-visible SynthID watermarks in all images generated with Imagen 4 for traceability and responsible AI use.

    In short, Imagen 4 is fully available now with a new fast generation option, making it both high-quality and efficient for a wide spectrum of image generation needs.

  • Apple readies AI-powered smart home devices, including a tabletop robot and intelligent cameras

    Apple is preparing a major AI-driven expansion into smart home devices, signaling a significant push fueled by artificial intelligence to compete with Amazon, Google, and other established players.

    Key elements of this plan include:

    • A tabletop robot codenamed J595, expected around 2027. This robot features an iPad-sized display mounted on a motorized arm capable of swiveling to follow users throughout a room. It acts as a virtual companion with an AI-driven personality, enhanced video call capabilities, and sophisticated Siri interaction powered by large language models for conversational and proactive assistance. This represents Apple’s ambitious entry into home robotics.

    • A smart display device codenamed J490, slated for release by mid-2026. It resembles a square iPad screen mounted on a wall and runs a new multi-user operating system called “Charismatic.” Users are expected to interact with it mostly through voice commands, powered by an upgraded Siri that offers personalized experiences using facial recognition and spatial sensing of household members.

    • A lineup of battery-powered home security cameras codenamed J450 with facial recognition and infrared sensors. These cameras automate household functions (e.g., turning lights on/off, playing personalized music) and compete directly with Amazon’s Ring and Google’s Nest. The cameras integrate tightly with Apple’s Home app and iCloud+ for secure video storage.

    • A revamped Siri assistant, internally codenamed Linwood, built from the ground up with large language model technology. This new Siri aims to transcend simple command-based interaction to become a conversational, proactive AI companion capable of handling complex, multi-step tasks across Apple’s ecosystem. It will power all these new smart home devices, enabling deeper automation and richer user experiences.

    • These devices and software are part of Apple’s strategic effort to diversify its business beyond traditional mobile devices, enhance its AI capabilities, and build a cohesive, AI-powered smart home ecosystem that leverages Apple’s strengths in privacy, hardware-software integration, and user experience design.

    • Apple is expected to showcase progress on this smart home ecosystem soon, with the smart display device arriving first (reports cite timelines ranging from late 2025 to mid-2026) and the more advanced robot following in 2027.

    This ambitious push is partly driven by critiques that Apple has lagged in generative AI and home automation compared to competitors, and it comes as the company seeks new growth avenues after slower innovation cycles in core products like iPhones and mixed reactions to other products like the Vision Pro headset.

    In summary, Apple’s AI-powered smart home devices include:

    • A smart wall-mounted display (J490) with a new OS focusing on multi-user voice interaction.
    • A robotic tabletop companion (J595) with a swiveling screen and advanced AI.
    • Intelligent battery-powered security cameras (J450) with automation capabilities.
    • A fundamentally redesigned, conversational Siri assistant (Linwood).

    Together, they form a holistic, AI-centered smart home vision targeting releases between late 2025 and 2027, elevating Apple’s footprint in home automation and AI-driven interactions.

  • Oracle partners with Google Cloud to offer Gemini AI models in a strategic partnership

    Oracle and Google Cloud have announced a strategic partnership to offer Google’s Gemini AI models directly to Oracle customers through Oracle Cloud Infrastructure (OCI).

    Key points of this collaboration include:

    • The integration will enable Oracle Cloud users and enterprise application customers to access Gemini’s advanced AI capabilities in text, image, video, and audio generation as part of Oracle’s Generative AI service.
    • Customers can utilize these AI tools using their existing Oracle cloud credits, streamlining procurement and billing.
    • This partnership expands upon previous multicloud efforts between Oracle and Google Cloud, such as running Oracle Database services in Google data centers and enabling low-latency cross-cloud transfers.
    • Oracle will provide a selection of AI model choices from multiple providers, including Gemini models, to avoid dependency on any single proprietary technology.
    • Google Cloud benefits by deepening its reach into the enterprise market, competing with other cloud vendors by embedding generative AI into critical business systems.
    • Initial offerings will start with Gemini 2.5 models, extendable to the full Gemini range, including capabilities for advanced coding, productivity automation, research, and multimodal understanding.
    • Future plans include integrating Gemini AI models into Oracle Fusion Cloud Applications for finance, HR, supply chain, sales, service, and marketing workflows.
    • Oracle customers will be able to build AI-powered agents and benefit from Google Gemini’s features like up-to-date grounding in Google Search data, large context windows, robust encryption, and enterprise-grade security.
    • This move aligns with a trend where enterprises seek multicloud approaches to deploy best-of-breed AI technologies and improve mission-critical systems.

    Oracle executives say the partnership is focused on delivering powerful, secure, and cost-effective AI solutions that accelerate innovation while keeping AI technologies close to customer data, with an emphasis on security and scalability. Google Cloud CEO Thomas Kurian remarked that Gemini models will support a wide range of enterprise use cases and make AI deployment easier for Oracle users.

    Overall, this alliance positions Oracle as a significant distributor of Google’s Gemini AI technology in enterprise cloud environments while enhancing Google Cloud’s footprint in AI-powered enterprise solutions.

  • Trump administration weighs government stake in Intel

    The Trump administration is currently in talks with Intel about the U.S. government potentially taking an equity stake in the semiconductor company. These discussions follow a recent meeting between President Trump and Intel CEO Lip-Bu Tan. The potential government investment aims to support Intel’s effort to expand its domestic manufacturing, specifically its much-delayed hub in Ohio, which Intel had intended to become the world’s largest chipmaking facility.

    The exact size of the government’s stake is not clear, and the plans remain fluid as negotiations continue. The proposed government investment would involve direct financial support to Intel, which is facing challenges including delays, financial difficulties, and increased competition. After news of these talks, Intel’s stock surged approximately 7%.

    This move represents an unusual and significant direct government intervention in private enterprise, reflecting broader strategic goals to strengthen domestic semiconductor manufacturing amid geopolitical concerns and supply chain vulnerabilities. Intel is seen as a key player because it is the only major U.S. company capable of producing advanced chips domestically, a critical factor in reducing reliance on foreign suppliers.

    Despite previous tensions, including President Trump’s public calls for CEO Tan’s resignation due to alleged conflicts of interest involving Chinese investments, the meeting and subsequent talks signal a renewed collaboration effort aimed at bolstering the U.S. semiconductor industry.

  • AI start-up Perplexity makes $34.5bn bid for Google Chrome

    Perplexity AI, an artificial intelligence startup valued at about $18 billion as of mid-2025, made a surprise unsolicited all-cash offer of $34.5 billion to acquire Google’s Chrome web browser. This bid is nearly double Perplexity’s own valuation.

    Key points about this acquisition bid:

    • The offer aims to buy Google’s Chrome browser amid a pending U.S. antitrust ruling that could force Google to divest Chrome as part of remedies against Google’s monopoly in search.
    • Perplexity pledged to keep the Chromium engine that Chrome is based on open source and promised to invest around $3 billion in its continued development.
    • Despite acquiring Chrome, Perplexity said it would keep Google as the default search engine on Chrome rather than replacing it with its own AI-powered search.
    • The move is also seen as a strategy by Perplexity to gain immediate access to billions of users and vast behavioral data to compete more effectively in AI-driven search and browsing markets.
    • Google has responded by calling the proposed sale “wildly overbroad” and asserted it would hurt consumers and security, planning to contest any such forced divestiture.
    • Perplexity recently launched its own AI-powered web browser called Comet that integrates an AI assistant more deeply than Chrome’s current AI features.
    • The U.S. Department of Justice has pushed for Google to divest Chrome as part of its antitrust resolution, as Chrome is viewed as a critical search access point reinforcing Google’s dominance.
    • Several investors have committed to backing Perplexity’s ambitious offer, although financing details remain private.
    • Industry analysts consider Google unlikely to sell Chrome voluntarily, expecting a lengthy legal battle over this issue.

    Perplexity’s bold $34.5 billion bid to acquire Google Chrome represents a major challenge to Google’s dominance in web browsing and search amid growing regulatory pressure. The deal would dramatically accelerate Perplexity’s growth and influence by granting it control over the world’s most popular browser and its vast user base—if it succeeds despite Google’s resistance and the complex legal environment.

  • Anthropic hires Humanloop execs in deal to boost enterprise AI offerings

    Anthropic has hired Humanloop’s three co-founders—CEO Raza Habib, CTO Peter Hayes, and CPO Jordan Burgess—along with most of its engineering and research team in an acqui-hire deal aimed at strengthening Anthropic’s enterprise AI tooling and safety capabilities. The deal does not include Humanloop’s assets or intellectual property, a structure increasingly common in AI, where critical know-how often resides in the expertise of the people themselves.

    Humanloop was a UK-based AI startup specializing in prompt management, large language model (LLM) evaluation, and observability for enterprise customers such as Duolingo and Gusto. Its tools helped enterprises develop, evaluate, and fine-tune AI applications safely and reliably at scale.

    Key points about the acquisition and its strategic impact:

    • Anthropic wants to bolster its enterprise AI offering by integrating Humanloop’s experienced team, who bring valuable expertise in AI tooling, continuous model evaluation, and safety workflows.
    • The acqui-hire strengthens Anthropic’s position against competitors like OpenAI and Google DeepMind by enhancing the performance, safety, reliability, and compliance capabilities of its AI products.
    • Brad Abrams, Anthropic’s API product lead, highlighted that Humanloop’s proven experience will be invaluable in advancing Anthropic’s work in AI safety and building useful AI systems.
    • Humanloop’s platform service is being shut down as of September 2025, and customers are encouraged to migrate to other solutions.
    • The acquisition underscores the fierce competition for AI talent, with Anthropic prioritizing bringing in top talent to build enterprise-ready AI safety and evaluation tools.

    Overall, this strategic hiring move allows Anthropic to significantly boost its enterprise AI tooling ecosystem, enabling it to operationalize AI safety and compete more effectively in the growing market for enterprise AI applications.

  • Sam Altman and OpenAI are reportedly backing Merge Labs, a startup rival to Elon Musk’s Neuralink

    Merge Labs is a new startup co-founded by Sam Altman, the CEO of OpenAI, that aims to develop brain-computer interface (BCI) technology. This venture is positioned to directly compete with Elon Musk’s Neuralink and other companies like Precision Neuroscience and Synchron, which are working on similar brain interface technologies.

    Key points about Merge Labs:

    • Merge Labs will use artificial intelligence to develop brain implants allowing direct communication between human brains and computers.
    • The name “Merge Labs” refers to a concept Altman introduced in 2017 called “the merge,” describing the merging of human brains and computers.
    • The startup is expected to be valued at around $850 million and will raise a significant portion of its funding from OpenAI’s venture team.
    • Sam Altman is co-founding the company with Alex Blania (co-founder of Worldcoin), though Altman will not personally invest capital.
    • The goal is to create high-bandwidth brain-computer interfaces that could allow people to control computers with their thoughts and potentially lead to a seamless integration of human cognition with AI.
    • This initiative intensifies the competition between Altman and Musk, who previously had ties through OpenAI but have since diverged with competing visions, including Musk’s own company, xAI.
    • Neuralink has already progressed to human trials for quadriplegic patients, while Merge Labs is in the early stages focused on raising funds and assembling a team.

    In summary, Merge Labs represents OpenAI’s strategic move to enter the brain-machine interface market, advancing technologies that connect human brains to digital systems, directly challenging Musk’s Neuralink.

  • OpenAI Introduces Basis: A New Approach to Aligning AI Systems with Human Intent

    OpenAI has unveiled Basis, a novel framework designed to improve how AI systems understand and align with human goals and values. This initiative represents a significant step forward in addressing one of AI’s most persistent challenges: ensuring that advanced models behave in ways that are beneficial, predictable, and aligned with what users actually want.

    The Challenge of AI Alignment: AI alignment refers to the challenge of ensuring that AI systems pursue the objectives their designers intend, without unintended consequences. As models grow more powerful, traditional alignment methods—like reinforcement learning from human feedback (RLHF)—face limitations. Basis seeks to overcome these by creating a more robust, scalable foundation for alignment.

    How Basis Works: Basis introduces several key innovations (a purely illustrative sketch of how they might fit together follows the list):

    1. Explicit Representation of Intent
      Unlike previous approaches that infer intent indirectly, Basis structures human preferences in a way that AI can directly reference and reason about. This reduces ambiguity in what the system is supposed to optimize for.
    2. Modular Goal Architecture
      Basis breaks down complex objectives into smaller, verifiable components. This modularity makes it easier to debug and adjust an AI’s behavior without retraining the entire system.
    3. Iterative Refinement via Debate
      The framework incorporates techniques where multiple AI instances “debate” the best interpretation of human intent, surfacing edge cases and improving alignment through structured discussion.
    4. Human-in-the-Loop Oversight
      Basis maintains continuous feedback mechanisms where humans can correct misunderstandings at multiple levels of the system’s decision-making process.
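
    Since the announcement describes these mechanisms only at a high level, the following is a purely illustrative sketch of how an explicit intent representation, a debate step, and a human-in-the-loop checkpoint could be wired together. It is not OpenAI’s implementation; every class, function, and scoring rule here is a hypothetical stand-in.

    ```python
    # Purely illustrative sketch of the pattern described above: explicit intent,
    # competing interpretations ("debate"), and a human correction step.
    # All names and the scoring rule are hypothetical stand-ins, not OpenAI code.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Intent:
        """Explicit, structured representation of what the user wants."""
        description: str
        constraints: list[str] = field(default_factory=list)

    @dataclass
    class Interpretation:
        """One candidate reading of the intent, proposed by a 'debater'."""
        summary: str
        satisfies: dict[str, bool]  # constraint name -> satisfied?

    def debate(intent: Intent,
               debaters: list[Callable[[Intent], Interpretation]]) -> Interpretation:
        """Collect interpretations and keep the one satisfying the most
        constraints (a crude stand-in for a structured debate judge)."""
        candidates = [d(intent) for d in debaters]
        return max(candidates, key=lambda c: sum(c.satisfies.values()))

    def human_review(best: Interpretation) -> Interpretation:
        """Human-in-the-loop checkpoint: a person can veto or amend the winner."""
        print(f"Proposed interpretation: {best.summary}")
        if input("Accept? [y/n] ").strip().lower() != "y":
            best.summary = input("Enter corrected interpretation: ")
        return best

    if __name__ == "__main__":
        intent = Intent(
            description="Summarize the quarterly report for the board",
            constraints=["no speculation", "under 300 words"],
        )
        # Two toy 'debaters' with different readings of the same intent.
        debaters = [
            lambda i: Interpretation("Bullet-point summary, facts only",
                                     {"no speculation": True, "under 300 words": True}),
            lambda i: Interpretation("Narrative summary with forward-looking guesses",
                                     {"no speculation": False, "under 300 words": True}),
        ]
        final = human_review(debate(intent, debaters))
        print("Final interpretation:", final.summary)
    ```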

    Applications and Benefits: The Basis framework enables:

    • More reliable AI assistants that better understand nuanced requests
    • Safer deployment of autonomous systems by making their decision-making more transparent
    • Improved customization for individual users’ needs and preferences
    • Better handling of complex, multi-step tasks without goal misgeneralization

    Technical Implementation: OpenAI implemented Basis by:

    • Developing new training paradigms that separate intent specification from policy learning
    • Creating verification tools to check alignment at different abstraction levels
    • Building infrastructure to efficiently incorporate human feedback during operation

    Early testing shows Basis-equipped systems demonstrate:

    • 40% fewer alignment failures on complex tasks
    • 3x faster correction of misaligned behaviors
    • Better preservation of intended behavior even as models scale

    Future Directions: OpenAI plans to:

    1. Expand Basis to handle multi-agent scenarios
    2. Develop more sophisticated intent representation languages
    3. Create tools for non-experts to specify and adjust AI goals
    4. Integrate Basis approaches into larger-scale models

    Broader Implications: The introduction of Basis represents a philosophical shift in AI development:

    • Moves beyond “black box” alignment approaches
    • Provides a structured way to talk about and improve alignment
    • Creates foundations for more auditable AI systems
    • Could enable safer development of artificial general intelligence

    Availability and Next Steps: Basis is initially deployed in OpenAI’s research environment, and the company plans to gradually incorporate its techniques into product offerings. Researchers can access preliminary documentation and experimental implementations through OpenAI’s partnership program.

    Basis marks an important evolution in AI alignment methodology. By providing a more systematic way to encode, verify, and refine human intent in AI systems, OpenAI aims to create models that are not just more powerful but more trustworthy and controllable. This work could prove crucial as AI systems take on increasingly complex roles in society.

  • Claude Sonnet 4 now supports 1M tokens of context

    Anthropic Introduces 1 Million Token Context Window, Revolutionizing Long-Context AI

    Anthropic has announced a groundbreaking advancement in AI capabilities: a 1 million token context window for Claude Sonnet 4. This milestone dramatically expands the amount of information AI can process in a single interaction, enabling deeper analysis of lengthy documents, complex research, and extended conversations without losing coherence.

    Why a 1M Context Window Matters: Most AI models, including previous versions of Claude, have context limits ranging from 8K to 200K tokens—enough for essays or short books but insufficient for large-scale data analysis. The 1 million token breakthrough (equivalent to ~700,000 words or multiple lengthy novels) unlocks new possibilities (a minimal API sketch follows this list):

    • Analyzing entire codebases in one go for software development.
    • Processing lengthy legal/financial documents without splitting them.
    • Maintaining coherent, long-term conversations with AI assistants.
    • Reviewing scientific papers, technical manuals, or entire book series seamlessly.
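
    As a concrete example of the codebase-scale use case above, here is a minimal sketch of sending a very large document to Claude Sonnet 4 with the Anthropic Python SDK. The model identifier and the long-context beta flag are assumptions based on Anthropic’s published naming conventions; verify both against the current API documentation.

    ```python
    # Minimal sketch: passing a very large document to Claude Sonnet 4.
    # Requires `pip install anthropic`. The model ID and beta flag below are
    # assumptions; confirm them in Anthropic's API documentation.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # e.g. a concatenated repository dump or a set of merged contracts
    with open("entire_codebase_dump.txt") as f:
        big_document = f.read()

    response = client.beta.messages.create(
        model="claude-sonnet-4-20250514",     # assumed Sonnet 4 model ID
        max_tokens=4096,
        betas=["context-1m-2025-08-07"],      # assumed 1M-context beta flag
        messages=[{
            "role": "user",
            "content": f"{big_document}\n\nSummarize the architecture and list the main risks.",
        }],
    )

    print(response.content[0].text)
    ```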

    Technical Achievements Behind the Breakthrough: Scaling context length is not just about adding memory—it requires overcoming computational complexity, memory management, and attention mechanism challenges. Anthropic’s innovations include:

    1. Efficient Attention Mechanisms – Optimized algorithms reduce the quadratic cost of long sequences (a brief note on this cost follows the list).
    2. Memory Management – Smarter caching and retrieval prevent performance degradation.
    3. Training Stability – New techniques ensure the model remains accurate over extended contexts.
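
    To make the cost mentioned in item 1 concrete: standard self-attention over a sequence of length n compares every token with every other token,

    $$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d}}\right)V,$$

    so time grows as O(n²·d) and memory as O(n²) in the sequence length. Moving from a 200K to a 1M token window multiplies n by 5 and that quadratic term by roughly 25, which is why more efficient attention and careful memory management are prerequisites for a window this large.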

    Real-World Applications: The 1M context window enables transformative use cases:

    • Legal & Compliance: Lawyers can upload entire case histories for instant analysis.
    • Academic Research: Scientists can cross-reference hundreds of papers in one query.
    • Enterprise Data: Businesses can analyze years of reports, contracts, and emails in a single session.
    • Creative Writing & Editing: Authors can refine full manuscripts with AI feedback.

    Performance & Accuracy: Unlike earlier models that struggled with “lost-in-the-middle” issues (forgetting mid-context information), Claude’s extended memory maintains strong recall and reasoning across the full 1M tokens. Benchmarks show improved performance in:

    • Needle-in-a-haystack tests (retrieving small details from massive texts).
    • Summarization of long documents with high fidelity.
    • Multi-document question answering without fragmentation.

    Future Implications: This advancement pushes AI closer to human-like comprehension of vast information. Potential next steps include:

    • Multi-modal long-context (integrating images, tables, and text).
    • Real-time continuous learning for persistent AI memory.
    • Specialized industry models for medicine, law, and engineering.

    Availability & Access : The 1M token feature is rolling out to Claude Pro and Team users, with enterprise solutions for large-scale deployments. Anthropic emphasizes responsible scaling, ensuring safety and reliability even with expanded capabilities.

    Anthropic’s 1 million token context window marks a quantum leap in AI’s ability to process and reason over large datasets. By breaking the context barrier, Claude unlocks new efficiencies in research, business, and creativity—setting a new standard for what AI can achieve.