Category: News

  • Bring AI to your formulas with the COPILOT function in Microsoft Excel

    Microsoft Excel’s new Copilot feature is an AI-powered tool that helps users work faster and smarter by integrating advanced generative AI directly into Excel’s interface and formulas. Here are key points about the Excel Copilot feature:

    • The Copilot function allows users to enter natural language commands in spreadsheet cells to perform tasks like categorizing data, summarizing feedback, generating tables, brainstorming ideas, and analyzing data trends.
    • Users can invoke the feature using the formula syntax: =COPILOT(prompt, optional data references), enabling AI-powered results that update dynamically as the data changes.
    • It works alongside existing Excel functions (e.g., IF, SWITCH, LAMBDA) and integrates into Excel’s calculation engine.
    • Copilot can help import data from web sources, OneDrive, SharePoint, or organizational communications.
    • It can highlight, filter, and sort data, create and explain formulas, and generate charts, pivot tables, and data insights like trends and outliers.
    • The AI only works with data within the spreadsheet and cannot access external data sources unless imported explicitly.
    • Excel Copilot is accessible via an icon in the ribbon or a sparkle icon in a cell, opening a chat interface for easy prompting.
    • It requires files to be saved in Microsoft cloud services like OneDrive or SharePoint to function.
    • Microsoft assures users that data processed by Copilot is never used to train AI models, maintaining user confidentiality.
    • The feature is currently rolling out to Windows and Mac users on the Beta Channel with a Microsoft 365 Copilot license, with web versions forthcoming.
    • Microsoft warns against relying on Copilot in “high-stakes” scenarios due to potential inaccuracies and limits its use to 100 functions every 10 minutes.
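
    Based on the `=COPILOT(prompt, optional data references)` syntax described above, usage in a cell might look like the following (the prompt text and cell ranges here are hypothetical illustrations, not taken from Microsoft's documentation):

    ```
    =COPILOT("Classify each feedback comment as Positive, Negative, or Neutral", A2:A100)
    =COPILOT("Summarize the main themes in this customer feedback", A2:A100)
    ```

    Because the function sits in the calculation engine, results like these would recompute automatically when the referenced cells change, just as with IF or LAMBDA.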

    Microsoft Excel Copilot aims to make complex data tasks simpler by allowing users to interact with their data using natural language and AI-driven assistance directly within their spreadsheets.

  • Intel’s new Arc graphics driver lets you dedicate up to 87% of laptop memory capacity to the iGPU for VRAM

    A new graphics driver from Intel introduces a groundbreaking feature for laptops equipped with its latest Core Ultra (“Meteor Lake”) processors: the ability to manually override and drastically increase the amount of system RAM allocated to the integrated GPU (iGPU) as dedicated video memory (VRAM).

    Traditionally, operating systems like Windows dynamically manage how much system memory is shared with the integrated graphics. This “shared GPU memory” is usually capped at half of the total system RAM. For a common laptop with 16GB of RAM, this means the iGPU might only access a maximum of 8GB for graphics tasks, even if more would be beneficial.

    Intel’s new driver, version 31.0.101.5370 beta (and later), shatters this limitation. It adds a manual override slider within the Intel Graphics Command Center, allowing users to directly specify how much system memory is dedicated to the GPU. The most striking aspect is the upper limit: users can allocate up to 87% of their total system memory to the iGPU. On a 16GB laptop, this translates to a potential allocation of approximately 14GB of VRAM, a massive increase over the previous default.

    This feature is targeted squarely at enhancing performance for memory-intensive workloads. The primary beneficiaries are:

    1. Gamers: Modern games, especially those with high-resolution textures, are increasingly VRAM-hungry. An insufficient VRAM buffer can cause significant stuttering, frame rate drops, and lower texture quality. By allowing a much larger VRAM pool, this feature can prevent these bottlenecks, leading to a smoother gaming experience on Intel Arc-powered Ultra laptops.
    2. Content Creators: Applications for video editing, 3D rendering, and AI processing heavily utilize GPU memory. A larger dedicated VRAM allocation can significantly speed up rendering times and allow for work on more complex projects that were previously hampered by memory constraints.

    However, the article highlights crucial caveats. This memory is not additional; it is reallocated from the same pool used by the CPU and the rest of the system. Allocating 14GB of 16GB to the GPU leaves only 2GB for Windows and other applications, which would cripple overall system performance. Therefore, this tool is not a “set and forget” solution but a powerful option for advanced users to optimize their system for specific tasks. It is most effectively used on systems with a larger amount of RAM (e.g., 32GB or more) where a significant portion can be dedicated to the GPU without starving the operating system.
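
    The trade-off above is simple arithmetic, sketched here for a few configurations (the 87% figure is the driver's quoted ceiling; the helper function is purely illustrative):

    ```python
    def igpu_allocation(total_gb: float, fraction: float = 0.87) -> tuple[float, float]:
        """Split total system RAM between the iGPU and the rest of the system.

        fraction mirrors the 87% ceiling exposed by Intel's driver slider.
        Returns (vram_gb, remaining_gb).
        """
        vram = total_gb * fraction
        return vram, total_gb - vram

    # On a 16GB laptop, the 87% ceiling leaves very little for the OS:
    vram, rest = igpu_allocation(16)
    print(f"iGPU: {vram:.1f} GB, system: {rest:.1f} GB")  # iGPU: 13.9 GB, system: 2.1 GB

    # On a 32GB machine, even a 50% split already yields a large VRAM pool:
    vram32, rest32 = igpu_allocation(32, 0.5)
    print(f"iGPU: {vram32:.1f} GB, system: {rest32:.1f} GB")  # iGPU: 16.0 GB, system: 16.0 GB
    ```

    This is why the feature suits 32GB-plus systems best: the GPU can gain a console-sized frame buffer while the OS keeps a comfortable working set.
    
    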

    Intel’s new shared GPU memory override is a significant and welcome advancement for on-the-go gaming and content creation. It provides users with unprecedented control over their hardware resources, empowering them to tailor their laptop’s performance to their immediate needs, ultimately extracting more potential from the integrated Intel Arc graphics within Core Ultra processors.

  • Acrobat Studio Delivers New AI-Powered Home for Productivity and Creativity with PDF Spaces, Express Creation Tools, AI Agents

    Adobe launched Acrobat Studio on August 19, 2025. Acrobat Studio is a transformative platform that unites Adobe Acrobat, Adobe Express, and AI agents into a single productivity and creativity home. It turns PDFs and other file collections into dynamic, conversational knowledge hubs called PDF Spaces, where customizable AI Assistants help users uncover insights, generate ideas, and collaborate. Acrobat Studio integrates Adobe Express content creation tools with trusted PDF features, enabling seamless creation, editing, sharing, and learning from documents and visual content, all in one place.

    Key features of Acrobat Studio include:

    • PDF Spaces that convert file collections into interactive, AI-powered environments.
    • AI Assistants with roles like “instructor,” “analyst,” or “entertainer” to synthesize information and answer questions.
    • Integration of Adobe Express Premium tools and Adobe Firefly-powered image and video generation.
    • Support for scanning, e-signing, editing, combining documents, and AI-powered summarization.
    • Application for business professionals, consumers, students, and travelers to work smarter and faster with AI.

    The product is available as a subscription at $29.95/month for businesses and $25/month for individuals; both tiers include premium Adobe Express features.

    Acrobat Studio is positioned as the next evolution of Adobe Acrobat, the application that originated the PDF format, aiming to enhance productivity and creativity with AI and advanced tools.

    Additionally, Adobe has expanded its GenStudio product with AI-powered innovations for video and display ad campaign creation, enhancing marketing workflows with AI across multiple platforms.

    There have also been monthly security updates for Adobe products released in July 2025 with medium risk mitigations.

    This news showcases Adobe’s strong focus on AI integration into its suite to transform productivity, creativity, and marketing workflows effectively.

  • Google Translate prepares speed vs. accuracy translation modes with a new AI model picker feature

    Google is preparing a major AI-powered update for its Translate app, version 9.15.114, which introduces a new AI model picker feature. This will allow users, for the first time, to choose between translation modes optimized for speed or accuracy. The two options available will be:

    • Fast mode: Optimized for speed and efficiency, providing quicker translations using on-device processing.
    • Advanced mode: Powered by Google’s Gemini AI model, focusing on accuracy with more computationally intensive, cloud-based translations.

    Currently, the advanced mode is limited to English-Spanish and English-French language pairs. The model picker will be accessible both on the main screen and on the translation results page.

    This update includes reorganizing the UI for better accessibility, moving buttons for microphone, handwriting, and paste functions to a lower toolbar for easier one-handed use. The update aligns with broader AI trends of letting users balance speed and accuracy according to their needs, improving user control over translations from casual chats to professional documents.

    This evolution in Google Translate builds on its Gemini AI capabilities and addresses user feedback for more reliable and context-aware translations, while also considering privacy by enabling some processing on-device. The rollout date for this feature to all users is not yet announced.

  • Grammarly Launches Specialized AI Agents and Writing Surface for Students and Professionals

    Grammarly has launched eight specialized AI agents designed to transform its writing platform into a more comprehensive productivity tool aimed at both students and professionals. These agents are integrated into “Grammarly Docs,” an AI-native writing surface that provides real-time, context-aware assistance at every stage of the writing process.

    The eight AI agents offer targeted help for a variety of specific writing challenges:

    • Reader Reactions: Predicts how a target audience (professors, managers, clients) might respond, question, or misunderstand the text, suggesting edits accordingly.
    • AI Grader: Provides rubric-based feedback and estimates grades for academic assignments before submission.
    • Citation Finder: Assists in finding credible sources, supports or challenges claims, and auto-formats citations.
    • Expert Review: Offers domain-specific guidance to raise academic or professional quality.
    • Proofreader: Improves clarity and flow with in-line edits while maintaining the writer’s voice.
    • AI Detector: Scores text on whether it appears AI- or human-written to check authenticity.
    • Plagiarism Checker: Compares writing against large databases to detect overlaps and ensure proper attribution.
    • Paraphraser: Rewrites text to match desired tones, audiences, and styles, supporting custom voice.

    These AI agents eliminate the need for complicated prompts by providing focused, intelligent support tailored to the user’s writing goals while preserving authorship and style. The rollout marks the beginning of Grammarly’s evolution into an AI-driven productivity platform, with plans for these agents to function seamlessly across all places where users write and collaborate.

    Grammarly CEO Shishir Mehrotra emphasized the focus on enhancing learning and AI literacy, especially for students entering a job market demanding both subject expertise and AI fluency. The AI grader, for example, can simulate professor-like feedback, allowing students to improve papers before submission.

    This launch makes Grammarly one of the few AI companies with a strong dedication to educational writing tools alongside professional uses. The company’s VP of Product Management, Luke Behnke, highlighted the shift from simple grammar suggestions to intelligent agents that actively help users achieve communication goals.

  • DeepSeek unveils V3.1 AI model with an expanded context window of up to 1 million tokens

    DeepSeek has unveiled its V3.1 AI model, which represents a significant advancement over the previous V3 version. The main highlight of V3.1 is its expanded context window, now capable of processing up to 1 million tokens. This allows the AI to handle much larger volumes of information, support longer conversations with improved recall, and deliver more coherent and contextually relevant interactions. The model’s architecture features advanced reasoning capabilities, showing improvements of up to 43% in multi-step reasoning tasks. It supports over 100 languages, including enhanced performance in Asian and low-resource languages, and demonstrates a 38% reduction in hallucinations compared to earlier versions.

    Technically, DeepSeek V3.1 uses a transformer-based architecture with 560 billion parameters, multi-modal capabilities (text, code, image understanding), and optimized inference for faster responses. It employs a mixture-of-experts (MoE) design activating only a subset of parameters per token for efficiency. Training innovations include FP8 mixed precision training and a novel load balancing strategy without auxiliary losses. Efficiency optimizations like memory-efficient attention and a multi-token prediction system improve speed and performance.
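
    The mixture-of-experts idea mentioned above — activating only a subset of parameters per token — can be sketched in a few lines. This is a generic top-k routing toy, not DeepSeek's actual implementation; the dimensions, router, and expert definitions are all illustrative assumptions:

    ```python
    import numpy as np

    def moe_layer(x, gate_w, experts, k=2):
        """Minimal mixture-of-experts routing for one token.

        x: (d,) token hidden state; gate_w: (d, n_experts) router weights;
        experts: list of callables, each mapping (d,) -> (d,).
        Only the top-k experts run, which is the source of MoE's efficiency:
        most parameters stay idle for any given token.
        """
        logits = x @ gate_w                       # router score per expert
        top = np.argsort(logits)[-k:]             # indices of the k best experts
        weights = np.exp(logits[top])
        weights /= weights.sum()                  # softmax over the selected experts
        return sum(w * experts[i](x) for w, i in zip(weights, top))

    rng = np.random.default_rng(0)
    d, n = 8, 4
    gate = rng.normal(size=(d, n))
    # Each "expert" here is just a random linear map for demonstration.
    experts = [lambda x, W=rng.normal(size=(d, d)): W @ x for _ in range(n)]
    y = moe_layer(rng.normal(size=d), gate, experts)
    print(y.shape)  # (8,)
    ```

    With k=2 of 4 experts active, only half the expert parameters touch each token; at DeepSeek's scale, the same principle keeps per-token compute far below what 560 billion total parameters would suggest.
    
    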

    DeepSeek positions V3.1 as suitable for advanced applications such as software development (code generation and debugging), scientific research, education, content creation, and business intelligence. The model is available now for enterprise customers via API and will roll out to Chrome extension users soon. Additionally, a smaller 7-billion parameter version of V3.1 will be released open source to support research and development.

    This announcement marks a significant milestone for DeepSeek, demonstrating a competitive and cost-effective AI solution with expanded context handling and advanced capabilities in reasoning and multilingual support.

  • Google Bets on Nuclear to Power AI Data Centers by 2030, Led by Small Modular Reactors (SMRs)

    Google has announced plans to power its AI data centers with nuclear energy by 2030. The company has partnered with Kairos Power to develop small modular nuclear reactors (SMRs), with the first advanced reactor, Hermes 2, set to be operational in Oak Ridge, Tennessee, by 2030. This reactor will deliver up to 50 megawatts of clean, carbon-free electricity to the Tennessee Valley Authority’s (TVA) grid, supporting Google’s data centers in Tennessee and Alabama.

    This is Google’s first-ever corporate power purchase agreement for Generation IV nuclear reactors and represents a significant step toward meeting the rapidly increasing energy demands driven by AI and digital technologies. The broader agreement aims to bring up to 500 megawatts of new nuclear capacity online by 2035 to supply Google’s growing energy needs.

    Google, Kairos Power, and TVA collaborate to share financial risks and operational responsibilities, ensuring that consumers of TVA are not burdened with the initial costs of building this new nuclear infrastructure. This move aligns with Google’s sustainability goals to power its operations with 24/7 carbon-free electricity and supports the broader transition to clean, reliable, and scalable energy sources for AI and cloud computing.

    The Hermes 2 reactor uses advanced liquid salt cooling technology, which offers safety and cost advantages over traditional nuclear plants. This project is part of Google’s ongoing investments in clean energy, including hydropower, geothermal, and fusion energy initiatives.

    Through these efforts, Google aims to assure a sustainable and resilient energy supply that can meet future AI power requirements while supporting U.S. leadership in clean energy innovation.

  • NVIDIA Unleashes Major GeForce NOW and RTX Updates Ahead of Gamescom 2025

    Nvidia has announced a major upgrade for its GeForce Now cloud gaming platform, bringing the next-generation Blackwell architecture to the service. This upgrade delivers RTX 5080-class GPU performance in the cloud, enabling stunning graphics and high frame rates that were previously only possible on high-end gaming PCs.

    Key highlights of the Nvidia Blackwell RTX integration into GeForce Now include:

    • RTX 5080-class performance with 62 teraflops compute and 48GB frame buffer.
    • Support for streaming games up to 5K resolution at 120 frames per second using DLSS 4 Multi-Frame Generation.
    • At 1080p, streaming can reach up to 360 frames per second with Nvidia Reflex technology, delivering response times as low as 30 milliseconds.
    • A new Cinematic Quality Streaming mode that improves color accuracy and visual fidelity.
    • The GeForce Now game library will more than double to over 4,500 titles with the new “Install-to-Play” feature, allowing users to add games directly to the cloud for faster access.
    • Expanded hardware support, including up to 90fps on the Steam Deck and 4K at 120Hz on supported LG TVs.
    • Partnerships with Discord and Epic Games for integrated social gaming experiences.
    • Network improvements with AV1 encoding and collaborations with Comcast and Deutsche Telekom for better low-latency streaming.

    The upgrade will roll out starting September 2025 and is included at no extra cost for GeForce Now Ultimate subscribers, maintaining the current $19.99 monthly price.

    This update represents the biggest leap in cloud gaming for GeForce Now, turning any compatible device into a high-end gaming rig with negligible latency and cinematic graphics powered by Nvidia’s latest Blackwell GPU technology.

  • Nano-Banana: The new generative AI model shaping the future of AI-driven image generation and editing, reportedly tied to Google

    Nano Banana is an advanced and mysterious AI image generation and editing model that recently appeared on platforms like lmarena.ai, creating a buzz in the AI and image-editing community. It is widely believed to have ties to Google, either as a prototype or a next-generation model potentially related to Google’s Imagen or Gemini projects, though no official confirmation has been made.

    Key features of Nano Banana include:

    • Exceptional text-to-image generation and natural language-based image editing capabilities.
    • Superior prompt understanding allowing multi-step and complex edits with high accuracy.
    • Consistent scene and character editing that maintains background, lighting, and camera coherence.
    • Very high fidelity in character consistency across different images.
    • Fast on-device processing making it suitable for real-time applications.
    • Versatility in producing photorealistic and stylistic outputs.
    • Free access for users on platforms like lmarena.ai through image editing battles.

    Users have praised Nano Banana for its impressive and consistent editing results, especially its ability to fill incomplete faces with realistic details and produce surreal yet accurate layered designs. It has been described as a potential “Photoshop killer” due to its ease of use and precision.

    While it excels in many areas, it occasionally shows limitations common to AI image models, such as minor visual glitches, anatomical errors, and text rendering issues.

    Nano Banana stands out against other AI image models with its context-aware edits, blending precision, and speed. It has attracted significant attention for its innovative approach and potential integration with future Google AI devices.

    For those interested in experimenting, Nano Banana can be encountered randomly in AI image editing battles on lmarena.ai, where users generate images by submitting prompts and selecting preferred outputs.

    This model is shaping the future of AI-driven image generation and editing, with a community eagerly following its developments and capabilities.

  • Google’s Gemma 3 270M: The compact model for hyper-efficient AI

    Gemma 3 270M embodies a “right tool for the job” philosophy. It’s a high-quality foundation model that follows instructions well out of the box, and its true power is unlocked through fine-tuning. Once specialized, it can execute tasks like text classification and data extraction with remarkable accuracy, speed, and cost-effectiveness. By starting with a compact, capable model, you can build production systems that are lean, fast, and dramatically cheaper to operate. Gemma 3 270M is designed to let developers take this approach even further, unlocking even greater efficiency for well-defined tasks. It’s the perfect starting point for creating a fleet of small, specialized models, each an expert at its own task.

    Core capabilities of Gemma 3 270M:

    • Compact and capable architecture: The model has a total of 270 million parameters: 170 million embedding parameters, owing to a large vocabulary, and 100 million in the transformer blocks. Thanks to the large vocabulary of 256k tokens, the model can handle specific and rare tokens, making it a strong base model for further fine-tuning in specific domains and languages.
    • Extreme energy efficiency: A key advantage of Gemma 3 270M is its low power consumption. Internal tests on a Pixel 9 Pro SoC show the INT4-quantized model used just 0.75% of the battery for 25 conversations, making it the most power-efficient Gemma model to date.
    • Instruction following: An instruction-tuned model is released alongside a pre-trained checkpoint. While this model is not designed for complex conversational use cases, it’s a strong model that follows general instructions right out of the box.
    • Production-ready quantization: Quantization-Aware Trained (QAT) checkpoints are available, enabling you to run the models at INT4 precision with minimal performance degradation, which is essential for deploying on resource-constrained devices.
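
    The parameter breakdown above can be checked with quick arithmetic. The hidden width (640) and the exact vocabulary size (262,144, i.e. "256k") are assumptions for illustration; the article only gives the rounded totals:

    ```python
    # Rough parameter budget for Gemma 3 270M, from the figures quoted above.
    vocab_size = 262_144   # "256k tokens" taken as 2**18 (assumption)
    hidden_dim = 640       # assumed embedding width

    embedding_params = vocab_size * hidden_dim        # one vector per token
    print(f"embedding: ~{embedding_params / 1e6:.0f}M")   # ~168M, matching the quoted ~170M

    transformer_params = 100e6                        # quoted for the transformer blocks
    total = embedding_params + transformer_params
    print(f"total: ~{total / 1e6:.0f}M")              # ~268M, i.e. the "270M" in the name
    ```

    The split explains the design: well over half the parameters live in the embedding table, which is what lets such a small model cover specific and rare tokens across many languages.
    
    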