• NVIDIA Unleashes Major GeForce NOW and RTX Updates Ahead of Gamescom 2025

    Nvidia has announced a major upgrade for its GeForce Now cloud gaming platform, bringing the next-generation Blackwell architecture to the service. This upgrade delivers RTX 5080-class GPU performance in the cloud, enabling stunning graphics and high frame rates that were previously only possible on high-end gaming PCs.

    Key highlights of the Nvidia Blackwell RTX integration into GeForce Now include:

    • RTX 5080-class performance with 62 teraflops compute and 48GB frame buffer.
    • Support for streaming games up to 5K resolution at 120 frames per second using DLSS 4 Multi-Frame Generation.
    • At 1080p, streaming can reach up to 360 frames per second with Nvidia Reflex technology, delivering response times as low as 30 milliseconds.
    • A new Cinematic Quality Streaming mode that improves color accuracy and visual fidelity.
    • The GeForce Now game library will more than double to over 4,500 titles with the new “Install-to-Play” feature, allowing users to add games directly to the cloud for faster access.
    • Expanded hardware support, including up to 90fps on the Steam Deck and 4K at 120Hz on supported LG TVs.
    • Partnerships with Discord and Epic Games for integrated social gaming experiences.
    • Network improvements with AV1 encoding and collaborations with Comcast and Deutsche Telekom for better low-latency streaming.

    The upgrade will roll out starting September 2025 and is included at no extra cost for GeForce Now Ultimate subscribers, maintaining the current $19.99 monthly price.

    This update represents the biggest leap in cloud gaming for GeForce Now, turning any compatible device into a high-end gaming rig with negligible latency and cinematic graphics powered by Nvidia’s latest Blackwell GPU technology.

  • Nano Banana: The new generative image AI, reportedly linked to Google, that is shaping the future of AI-driven image generation and editing

    Nano Banana is an advanced and mysterious AI image generation and editing model that recently appeared on platforms like lmarena.ai, creating a buzz in the AI and image-editing community. It is widely believed to have ties to Google, either as a prototype or a next-generation model potentially related to Google’s Imagen or Gemini projects, though no official confirmation has been made.

    Key features of Nano Banana include:

    • Exceptional text-to-image generation and natural language-based image editing capabilities.
    • Superior prompt understanding allowing multi-step and complex edits with high accuracy.
    • Consistent scene and character editing that maintains background, lighting, and camera coherence.
    • Very high fidelity in character consistency across different images.
    • Fast on-device processing making it suitable for real-time applications.
    • Versatility in producing photorealistic and stylistic outputs.
    • Free access for users on platforms like lmarena.ai through image editing battles.

    Users have praised Nano Banana for its impressive and consistent editing results, especially its ability to fill incomplete faces with realistic details and produce surreal yet accurate layered designs. It has been described as a potential “Photoshop killer” due to its ease of use and precision.

    While it excels in many areas, it occasionally shows limitations common to AI image models, such as minor visual glitches, anatomical errors, and text rendering issues.

    Nano Banana stands out against other AI image models with its context-aware edits, blending precision, and speed. It has attracted significant attention for its innovative approach and potential integration with future Google AI devices.

    For those interested in experimenting, Nano Banana can be encountered randomly in AI image editing battles on lmarena.ai, where users generate images by submitting prompts and selecting preferred outputs.

    This model is shaping the future of AI-driven image generation and editing, with a community eagerly following its developments and capabilities.

  • Google’s Gemma 3 270M: The compact model for hyper-efficient AI

    Gemma 3 270M embodies a “right tool for the job” philosophy. It is a high-quality foundation model that follows instructions well out of the box, and its true power is unlocked through fine-tuning. Once specialized, it can execute tasks like text classification and data extraction with remarkable accuracy, speed, and cost-effectiveness. By starting with a compact, capable model, developers can build production systems that are lean, fast, and dramatically cheaper to operate. Gemma 3 270M is designed to push this approach even further, unlocking greater efficiency for well-defined tasks, and it is intended as the starting point for a fleet of small, specialized models, each an expert at its own task.

    Core capabilities of Gemma 3 270M:

    • Compact and capable architecture: The model has 270 million parameters in total: 170 million embedding parameters, owing to a large vocabulary, and 100 million in the transformer blocks. The 256k-token vocabulary lets the model handle specific and rare tokens, making it a strong base for further fine-tuning in specific domains and languages.
    • Extreme energy efficiency: A key advantage of Gemma 3 270M is its low power consumption. Internal Google tests on a Pixel 9 Pro SoC show the INT4-quantized model used just 0.75% of the battery for 25 conversations, making it the most power-efficient Gemma model to date.
    • Instruction following: An instruction-tuned model is released alongside the pre-trained checkpoint. While it is not designed for complex conversational use cases, it follows general instructions well right out of the box (see the loading sketch after this list).
    • Production-ready quantization: Quantization-Aware Trained (QAT) checkpoints are available, enabling you to run the models at INT4 precision with minimal performance degradation, which is essential for deploying on resource-constrained devices.
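
    For a concrete sense of how the instruction-tuned checkpoint could serve as the seed of a small, specialized model, here is a minimal loading sketch using the Hugging Face transformers library. The model id "google/gemma-3-270m-it", the prompt, and the generation settings are illustrative assumptions rather than an official example; check the model card for the current identifier and recommended usage.

```python
# Minimal sketch: run the instruction-tuned Gemma 3 270M checkpoint with Hugging Face transformers.
# Assumptions: a recent transformers release is installed and the model id below is correct
# (verify on the Hugging Face model card before relying on it).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-270m-it",  # assumed id for the instruction-tuned checkpoint
)

# A narrow, well-defined task of the kind the post suggests fine-tuning for.
messages = [
    {
        "role": "user",
        "content": "Extract the city and the date from: 'The meetup is in Lisbon on 3 May.'",
    }
]

result = generator(messages, max_new_tokens=64)
# With chat-style input, generated_text holds the conversation including the model's new reply.
print(result[0]["generated_text"][-1]["content"])
```
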
  • “Learning” mode for Claude Code (designed to help users learn coding interactively while collaborating with Claude)

    Anthropic has launched a new “Learning” mode for Claude Code. This mode is designed to help users learn coding interactively while collaborating with Claude. In this mode, Claude occasionally pauses and marks sections with a “#TODO” comment, prompting users to write code themselves, essentially acting like a coding mentor.

    There is also an “Explanatory” mode where Claude explains its reasoning process as it codes, helping users understand architectural choices, trade-offs, and best practices.

    This Learning mode was initially available only for Claude for Education users but is now accessible to all Claude.ai users through a new option in the style dropdown menu. The feature aims to promote deeper understanding and independent thinking for coders at different levels.

    Anthropic is also opening the ability for developers to create custom learning modes using Claude Code’s new Output Styles feature, allowing even more personalized learning experiences.

    This launch marks a step towards making AI a collaborative partner in coding education and practice rather than just a tool for getting direct answers.

  • Imagen 4, Google’s latest text-to-image AI model, is now generally available through the Gemini API, Google AI Studio, and integration with Google Workspace apps

    Imagen 4, Google’s latest text-to-image AI model, is now generally available. It offers significant improvements in image quality, particularly in rendering text accurately within images. Google has introduced three model variants for different use cases and pricing:

    • Imagen 4 Fast: For rapid image generation at $0.02 per image.
    • Imagen 4 Standard: The flagship model designed for most use cases, priced at $0.04 per image.
    • Imagen 4 Ultra: For highly precise image generation closely aligned with text prompts, priced at $0.06 per image.

    Imagen 4 is accessible through the Gemini API, Google AI Studio, and integration with Google Workspace apps like Slides, Docs, and Vids. It delivers faster performance and higher fidelity images compared to its predecessor Imagen 3, supporting photorealistic and abstract styles with detailed textures, accurate lighting, and clean text rendering.
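
    Developers typically reach Imagen 4 through the google-genai Python SDK for the Gemini API. The sketch below shows what a basic call could look like; the model identifier and configuration fields are assumptions based on the SDK's documented image-generation interface, so verify them against the current Gemini API documentation before use.

```python
# Minimal sketch: generate a single image with Imagen 4 through the Gemini API.
# Assumptions: the google-genai SDK is installed (pip install google-genai), an API key is
# available in the environment, and the model id below matches the current Imagen 4 release.
from google import genai
from google.genai import types

client = genai.Client()  # picks up the Gemini API key from the environment

response = client.models.generate_images(
    model="imagen-4.0-generate-001",  # assumed id; the Fast and Ultra variants have their own ids
    prompt="A storefront sign that reads 'OPEN LATE', photorealistic, warm evening light",
    config=types.GenerateImagesConfig(number_of_images=1),
)

# Each generated image is returned with raw bytes that can be written straight to disk.
with open("storefront.png", "wb") as f:
    f.write(response.generated_images[0].image.image_bytes)
```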

    This availability enables developers, creators, and businesses to use Imagen 4 across a broad range of creative projects, from marketing designs to game art and editorial content.

    Additionally, Google embeds invisible SynthID watermarks in all images generated with Imagen 4 for traceability and responsible AI use.

    In short, Imagen 4 is fully available now with a new fast generation option, making it both high-quality and efficient for a wide spectrum of image generation needs.

  • Apple readies AI-powered smart home devices, including a tabletop robot and intelligent cameras

    Apple is preparing a major AI-driven expansion into smart home devices, signaling a significant push fueled by artificial intelligence to compete with Amazon, Google, and other established players.

    Key elements of this plan include:

    • A tabletop robot codenamed J595, expected around 2027. This robot features an iPad-sized display mounted on a motorized arm capable of swiveling to follow users throughout a room. It acts as a virtual companion with an AI-driven personality, enhanced video call capabilities, and sophisticated Siri interaction powered by large language models for conversational and proactive assistance. This represents Apple’s ambitious entry into home robotics.

    • A smart display device codenamed J490, slated for release by mid-2026. It resembles a square iPad screen mounted on a wall and runs a new multi-user operating system called “Charismatic.” Users are expected to interact with it primarily through voice commands, via an upgraded Siri that offers personalized experiences using facial recognition and spatial sensing of household members.

    • A lineup of battery-powered home security cameras codenamed J450 with facial recognition and infrared sensors. These cameras automate household functions (e.g., turning lights on/off, playing personalized music) and compete directly with Amazon’s Ring and Google’s Nest. The cameras integrate tightly with Apple’s Home app and iCloud+ for secure video storage.

    • A revamped Siri assistant, internally codenamed Linwood, built from the ground up with large language model technology. This new Siri aims to transcend simple command-based interaction to become a conversational, proactive AI companion capable of handling complex, multi-step tasks across Apple’s ecosystem. It will power all these new smart home devices, enabling deeper automation and richer user experiences.

    • These devices and software are part of Apple’s strategic effort to diversify its business beyond traditional mobile devices, enhance its AI capabilities, and build a cohesive, AI-powered smart home ecosystem that leverages Apple’s strengths in privacy, hardware-software integration, and user experience design.

    • Apple is expected to showcase progress on this smart home ecosystem soon, with the smart display device potentially launching later this year (2025), and the more advanced robot arriving in 2027.

    This ambitious push is partly driven by critiques that Apple has lagged in generative AI and home automation compared to competitors, and it comes as the company seeks new growth avenues after slower innovation cycles in core products like iPhones and mixed reactions to other products like the Vision Pro headset.

    In summary, Apple’s AI-powered smart home devices include:

    • A smart wall-mounted display (J490) with a new OS focusing on multi-user voice interaction.
    • A robotic tabletop companion (J595) with a swiveling screen and advanced AI.
    • Intelligent battery-powered security cameras (J450) with automation capabilities.
    • A fundamentally redesigned, conversational Siri assistant (Linwood).

    Together, they form a holistic, AI-centered smart home vision targeting releases between late 2025 and 2027, elevating Apple’s footprint in home automation and AI-driven interactions.

  • Oracle partners with Google Cloud to offer Gemini AI models in a strategic partnership

    Oracle and Google Cloud have announced a strategic partnership to offer Google’s Gemini AI models directly to Oracle customers through Oracle Cloud Infrastructure (OCI).

    Key points of this collaboration include:

    • The integration will enable Oracle Cloud users and enterprise application customers to access Gemini’s advanced AI capabilities in text, image, video, and audio generation as part of Oracle’s Generative AI service.
    • Customers can utilize these AI tools using their existing Oracle cloud credits, streamlining procurement and billing.
    • This partnership expands upon previous multicloud efforts between Oracle and Google Cloud, such as running Oracle Database services in Google data centers and enabling low-latency cross-cloud transfers.
    • Oracle will provide a selection of AI model choices from multiple providers, including Gemini models, to avoid dependency on any single proprietary technology.
    • Google Cloud benefits by deepening its reach into the enterprise market, competing with other cloud vendors by embedding generative AI into critical business systems.
    • Initial offerings will start with Gemini 2.5 models, extendable to the full Gemini range, including capabilities for advanced coding, productivity automation, research, and multimodal understanding.
    • Future plans include integrating Gemini AI models into Oracle Fusion Cloud Applications for finance, HR, supply chain, sales, service, and marketing workflows.
    • Oracle customers will be able to build AI-powered agents and benefit from Google Gemini’s features like up-to-date grounding in Google Search data, large context windows, robust encryption, and enterprise-grade security.
    • This move aligns with a trend where enterprises seek multicloud approaches to deploy best-of-breed AI technologies and improve mission-critical systems.

    Top Oracle officials highlight that the partnership emphasizes providing powerful, secure, and cost-effective AI solutions to accelerate innovation while keeping AI technologies close to customer data with a focus on security and scalability. Google Cloud CEO Thomas Kurian remarked that Gemini models will support a wide range of enterprise use cases and make AI deployment easier for Oracle users.

    Overall, this alliance positions Oracle as a significant distributor of Google’s Gemini AI technology in enterprise cloud environments while enhancing Google Cloud’s footprint in AI-powered enterprise solutions.

  • Trump administration weighs government stake in Intel

    The Trump administration is currently in talks with Intel about the U.S. government potentially taking an equity stake in the semiconductor company. These discussions follow a recent meeting between President Trump and Intel CEO Lip-Bu Tan. The potential government investment aims to support Intel’s effort to expand its domestic manufacturing, specifically its manufacturing hub in Ohio, which Intel had intended to become the world’s largest chipmaking facility but which has faced multiple delays.

    The exact size of the government’s stake is not clear, and the plans remain fluid as negotiations continue. The proposed government investment would involve direct financial support to Intel, which is facing challenges including delays, financial difficulties, and increased competition. After news of these talks, Intel’s stock surged approximately 7%.

    This move represents an unusual and significant direct government intervention in private enterprise, reflecting broader strategic goals to strengthen domestic semiconductor manufacturing amid geopolitical concerns and supply chain vulnerabilities. Intel is seen as a key player because it is the only major U.S. company capable of producing advanced chips domestically, a critical factor in reducing reliance on foreign suppliers.

    Despite previous tensions, including President Trump’s public calls for CEO Tan’s resignation due to alleged conflicts of interest involving Chinese investments, the meeting and subsequent talks signal a renewed collaboration effort aimed at bolstering the U.S. semiconductor industry.

  • AI start-up Perplexity makes $34.5bn bid for Google Chrome

    Perplexity AI, an artificial intelligence startup valued at about $18 billion as of mid-2025, made a surprise unsolicited all-cash offer of $34.5 billion to acquire Google’s Chrome web browser. This bid is nearly double Perplexity’s own valuation.

    Key points about this acquisition bid:

    • The offer aims to buy Google’s Chrome browser amid a pending U.S. antitrust ruling that could force Google to divest Chrome as part of remedies against Google’s monopoly in search.
    • Perplexity pledged to keep the Chromium engine that Chrome is based on open source and promised to invest around $3 billion in its continued development.
    • If it acquired Chrome, Perplexity said it would keep Google as the default search engine rather than replacing it with its own AI-powered search.
    • The move is also seen as a strategy by Perplexity to gain immediate access to billions of users and vast behavioral data to compete more effectively in AI-driven search and browsing markets.
    • Google has responded by calling the proposed sale “wildly overbroad” and asserted it would hurt consumers and security, planning to contest any such forced divestiture.
    • Perplexity recently launched its own AI-powered web browser called Comet that integrates an AI assistant more deeply than Chrome’s current AI features.
    • The U.S. Department of Justice has pushed for Google to divest Chrome as part of its antitrust resolution, as Chrome is viewed as a critical search access point reinforcing Google’s dominance.
    • Several investors have committed to backing Perplexity’s ambitious offer, although financing details remain private.
    • Industry analysts consider Google unlikely to sell Chrome voluntarily, expecting a lengthy legal battle over this issue.

    Perplexity’s bold $34.5 billion bid to acquire Google Chrome represents a major challenge to Google’s dominance in web browsing and search amid growing regulatory pressure. The deal would dramatically accelerate Perplexity’s growth and influence by granting it control over the world’s most popular browser and its vast user base—if it succeeds despite Google’s resistance and the complex legal environment.

  • Anthropic hires HumanLoop execs in deal to boost enterprise AI offerings

    Anthropic has acquired Humanloop’s three co-founders—CEO Raza Habib, CTO Peter Hayes, and CPO Jordan Burgess—along with most of its engineering and research team in an acqui-hire deal aimed at strengthening Anthropic’s enterprise AI tooling and safety capabilities. The deal does not include Humanloop’s assets or intellectual property, a distinction that matters less in AI, where the critical IP often resides in the expertise of the people themselves.

    Humanloop was a UK-based AI startup specializing in prompt management, large language model (LLM) evaluation, and observability for enterprise customers such as Duolingo and Gusto. Its tools helped enterprises develop, evaluate, and fine-tune AI applications safely and reliably at scale.

    Key points about the acquisition and its strategic impact:

    • Anthropic wants to bolster its enterprise AI offering by integrating Humanloop’s experienced team, who bring valuable expertise in AI tooling, continuous model evaluation, and safety workflows.
    • This acqui-hire strengthens Anthropic’s position against competitors like OpenAI and Google DeepMind by enhancing the performance, safety, reliability, and compliance capabilities of its AI products.
    • Brad Abrams, Anthropic’s API product lead, highlighted that Humanloop’s proven experience will be invaluable in advancing Anthropic’s work in AI safety and building useful AI systems.
    • Humanloop’s platform service is being shut down as of September 2025, and customers are encouraged to migrate to other solutions.
    • The acquisition underscores the fierce competition for AI talent, with Anthropic prioritizing bringing in top talent to build enterprise-ready AI safety and evaluation tools.

    Overall, this strategic hiring move allows Anthropic to significantly boost its enterprise AI tooling ecosystem, enabling it to operationalize AI safety and compete more effectively in the growing market for enterprise AI applications.