Category: News

  • Meta Launches Threads Communities: A Direct Shot at X’s Heart

    In an escalating showdown between social media titans, Meta has rolled out Threads Communities on October 2, 2025, a feature explicitly crafted to carve out niche havens amid the conversational chaos of Elon Musk’s X. With Threads boasting over 400 million monthly active users—doubling in the past year—this global beta introduces over 100 topic-specific groups, from NBA/WNBA enthusiasts to K-pop superfans and book lovers, aiming to foster deeper, more meaningful exchanges than the algorithm-driven frenzy on X.

    At its essence, Communities transforms Threads’ existing topic tags and custom feeds into dedicated, searchable spaces accessible via a new app tab. Users can join publicly without approval, post threaded discussions, and cross-share to their main feed, all while enforcing customizable rules and moderation tools. Each group sports a unique “Like” emoji—think a stack of books for literary chats or a basketball for sports debates—adding a playful touch that signals belonging right on your profile. Discoverability is seamless: search for interests or spot the three-dot icon on tags in your feed to dive in.

    This isn’t mere mimicry; it’s a calculated pivot. While X’s communities often devolve into viral echo chambers plagued by misinformation, Threads emphasizes curated, relevant threads to prioritize sustained dialogue over fleeting trends. Meta’s approach mirrors early Twitter’s organic evolution, formalizing user-driven topic tags into structured hubs that could outshine X’s by reflecting authentic behaviors. As Threads closes in on X’s mobile daily actives, this launch underscores Meta’s strategy to retain creators weary of X’s volatility.

    Early stats paint a promising picture: integrations with Instagram and the Fediverse have supercharged growth, with sports and tech groups already buzzing. On X itself, reactions range from promotional hype—“Threads Communities offer public, casual spaces to discuss niche interests”—to cautious optimism, like one user noting it “builds belonging across your interests.” Yet, skeptics voice concerns over moderation pitfalls and feature bloat, fearing it dilutes Threads’ fresh Twitter-alternative vibe. One post quipped, “Copying Twitter? At least copy something useful,” highlighting the tightrope Meta walks in emulating without alienating.

    Looking forward, Threads eyes badges for top contributors, enhanced ranking in feeds to surface quality content, and monetization via sponsored groups or premium tools—potentially evolving into virtual events or e-commerce nooks by 2026. As Zuckerberg’s platform inches toward overtaking Musk’s in engagement, Communities could tip the scales, luring users craving connection over controversy.

    In the ever-shifting social landscape, Threads Communities isn’t just a feature—it’s Meta’s bold bid to weave unbreakable threads of community, challenging X to rethink its turf.

  • Google Launches Jules Tools to Challenge GitHub Copilot

    In a move that’s sending ripples through the developer community, Google has unveiled Jules Tools, its latest AI-powered suite designed to upend the dominance of GitHub Copilot in the coding assistance arena. Announced on October 2, 2025, this launch marks a significant escalation in the AI coding wars, positioning Google’s asynchronous coding agent as a more autonomous, workflow-integrated alternative. With developers spending billions on productivity tools, Jules Tools arrives at a pivotal moment, promising to streamline code generation, debugging, and testing without the constant babysitting required by rivals.

    At its core, Jules Tools builds on the foundation of Google’s Jules AI coding agent, first introduced in public beta back in May 2025. Unlike Copilot, which excels at real-time code completions within IDEs like VS Code, Jules operates asynchronously—cloning repositories into a secure Google Cloud environment to handle “random tasks” in the background, such as writing unit tests or refactoring buggy code. The new Tools package introduces a sleek command-line interface (CLI) and public API, making it a seamless companion for developers’ toolchains. Installation is a breeze via npm, transforming Jules from a dashboard-like overseer into a hands-on command surface.

    What sets Jules Tools apart is its emphasis on autonomy and integration. Powered by advanced Gemini models, it doesn’t just suggest code snippets; it executes them independently, allowing coders to focus on high-level architecture while Jules tackles the drudgery. Early benchmarks reveal Jules outperforming Copilot in time savings—up to 40% more efficient for complex tasks like bug fixes and test generation. The octopus mascot, a quirky nod to multi-tasking tentacles, adds a layer of personality, with users on Reddit hailing it as “Google’s sassy answer to Codex.”

    This launch isn’t without controversy. Critics argue that Jules’ cloud-cloning approach raises privacy concerns, as it requires uploading code to Google’s servers—unlike Copilot’s more localized processing. However, Google counters with robust encryption and opt-in controls, emphasizing enterprise-grade security. For individual devs, Jules is free with generous limits, democratizing access in a market where Copilot’s premium tiers can sting.

    The timing couldn’t be better. As AI coding tools evolve, GitHub’s Copilot—now a Microsoft darling—faces scrutiny over hallucinated code and dependency risks. Jules Tools, with its async prowess, could lure teams seeking less interruption and more intelligence. TechCrunch reports heated competition, with Jules already integrating into popular CI/CD pipelines.

    Looking ahead, Google’s push signals a broader AI arms race. Will Jules dethrone Copilot, or will it become another tool in the crowded shed? Developers, fire up your terminals—the future of coding just got a lot more tentacles.

  • Google DeepMind Unveils AI Design Tool in Collaboration with Industrial Designer Ross Lovegrove

    Google DeepMind announced a groundbreaking collaboration with renowned industrial designer Ross Lovegrove and his studio, alongside design office Modem, to launch an AI-powered design tool that bridges human creativity and generative technology. This bespoke system, built on Gemini multimodal AI and DeepMind’s Imagen text-to-image model, transforms sketches into iterative prototypes, marking a shift from AI as mere generator to active creative partner. The project, detailed in a DeepMind blog post, challenges traditional design workflows by fine-tuning models on personal artistic data, enabling unprecedented personalization in industrial design.

    At the heart of the tool is a human-AI dialogue loop. Lovegrove’s team curated a dataset of his hand-drawn sketches—characterized by organic, biomorphic forms inspired by nature—to train Imagen, distilling his signature “design language” of fluid lines and lightweight structures. Rather than generic prompts, designers used precise, evocative descriptors like “lightweight skeletal form” or “biomorphic lattice,” avoiding the word “chair” to evade clichés and spark novel iterations. This linguistic precision, honed through trial-and-error, allowed the AI to riff on concepts, producing diverse visuals that aligned with Lovegrove’s vision. Gemini then expanded these into material explorations—envisioning titanium lattices or ethereal composites—while multi-view generations aided spatial reasoning. The process emphasized iteration: outputs fed back into prompts, fostering a “conversation” where AI amplified, rather than dictated, human intent.

    The focal challenge? Designing a chair—a deceptively simple object blending utility and aesthetics. Starting from digital sketches, the tool generated hundreds of variations, from skeletal exoskeletons to flowing membranes. Lovegrove Studio selected the most resonant, refining them collaboratively. The pinnacle: a physical prototype 3D-printed in metal, its intricate, vein-like structure evoking Lovegrove’s eco-futurist ethos while proving ergonomic viability. As Lovegrove reflected, “For me, the final result transcends the whole debate on design. It shows us that AI can bring something unique and extraordinary to the process.” Creative Director Ila Colombo added that the tool felt like “an extension of our studio,” blurring lines between artist and algorithm.

    Social media erupted with enthusiasm, with DeepMind’s announcement garnering over 50,000 views and praise from influencers like Evan Kirstel for “pushing design boundaries.” Yet, skeptics like @ai_is_mid quipped it’s “just…a chair,” questioning if AI truly innovates or merely iterates. Broader reactions, from LinkedIn designers to X threads, hailed it as “utopian potential,” echoing Lovegrove’s earlier 2025 interview on AI democratizing creativity.

    This unveiling signals AI’s maturation in creative fields, akin to CAD’s 1980s revolution but infused with generative flair. By personalizing models on individual styles, the tool lowers barriers for artists worldwide, promising faster prototyping and hybrid workflows. DeepMind envisions scaling it for broader applications—from furniture to architecture—where AI co-authors, not copies, human ingenuity. As Modem’s involvement underscores, such partnerships could redefine studios as interdisciplinary labs, fostering sustainable, boundary-defying designs in an era of rapid iteration.

  • Thinking Machines Launches Tinker API for Simplified LLM Fine-Tuning

    In a significant move shaking up the AI infrastructure landscape, Thinking Machines Lab—co-founded by former OpenAI CTO Mira Murati—unveiled Tinker API on October 1, 2025, its inaugural product aimed at democratizing large language model (LLM) fine-tuning. Backed by a whopping $2 billion in funding from heavyweights like Andreessen Horowitz, Nvidia, and AMD, the San Francisco-based startup, valued at $12 billion, positions Tinker as a developer-friendly tool to challenge proprietary giants like OpenAI by empowering users to customize open-weight models without the headaches of distributed training.

    At its core, Tinker is a Python-centric API that abstracts away the complexities of fine-tuning, allowing researchers, hackers, and developers to focus on experimentation rather than infrastructure management. Leveraging Low-Rank Adaptation (LoRA), it makes post-training efficient by sharing compute resources across multiple runs, slashing costs and letting developers drive training from modest hardware like a laptop. Users can switch between small and large models—such as Alibaba’s massive Qwen3-235B-A22B mixture-of-experts—with just a single string change in code, making it versatile for everything from quick prototypes to billion-parameter behemoths.

    Key features include low-level primitives like forward_backward for gradient computation and sample for generation, bundled in an open-source Tinker Cookbook library on GitHub. This managed service runs on Thinking Machines’ internal clusters, handling scheduling, resource allocation, and failure recovery automatically—freeing users from the “train-and-pray” drudgery of traditional setups. Early adopters from Princeton, Stanford, Berkeley, and Redwood Research have already tinkered with it, praising its simplicity for tasks like aligning models to specific datasets or injecting domain knowledge. As one X user noted, “You control algo and data, Tinker handles the complexity,” highlighting its appeal for bespoke AI without vendor lock-in.
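
    To make the shape of that workflow concrete, here is a minimal Python sketch of how the pieces fit together. It is illustrative only: forward_backward, sample, and the Tinker Cookbook are named in the announcement, but the import path, client construction, optim_step call, and argument names below are assumptions rather than the documented API.

      import tinker  # assumed package name; Tinker is currently in private beta

      # Switching base models is described as a one-string change, e.g. to
      # Alibaba's Qwen3-235B-A22B mixture-of-experts (identifier format assumed).
      client = tinker.ServiceClient().create_lora_training_client(
          base_model="Qwen3-235B-A22B"
      )

      # Toy dataset; in practice you bring your own prompts and completions.
      dataset = [{"prompt": "Translate to French: hello", "completion": "bonjour"}]

      for example in dataset:
          # forward_backward: run a forward pass and accumulate LoRA gradients
          # on Thinking Machines' managed clusters.
          client.forward_backward(example)
          client.optim_step()  # assumed optimizer-step primitive, not named in the announcement

      # sample: generate from the adapted weights to inspect the result.
      print(client.sample(prompt="Translate to French: good night"))

    The point of the split is that a loop like this never touches GPUs directly; scheduling, resource allocation, and failure recovery stay on Thinking Machines’ side, which is exactly the “you control algo and data, Tinker handles the complexity” division quoted above.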

    The launch arrives amid a fine-tuning arms race, where OpenAI’s closed ecosystem extracts “token taxes” on frontier models, leaving developers craving open alternatives. Tinker counters this by supporting a broad ecosystem of open-weight LLMs, fostering innovation in areas like personalized assistants or specialized analytics. Murati, who helmed ChatGPT’s rollout at OpenAI, teased on X her excitement for “what you’ll build,” underscoring the API’s hacker ethos.

    Currently in private beta, Tinker is free to start, with usage-based pricing rolling out soon—sign-ups via waitlist at thinkingmachines.ai/tinker. While hailed for lowering barriers (e.g., “Democratizing access for all”), skeptics on Hacker News question scalability for non-LoRA methods and potential over-reliance on shared compute. Privacy hawks also flag data handling in a post-OpenAI world, though Thinking Machines emphasizes user control.

    Tinker’s debut signals a pivot toward “fine-tune as a service,” echoing China’s fragmented custom solutions but scaled globally. As Murati’s venture eyes AGI through accessible tools, it invites a collaborative AI future—where fine-tuning isn’t elite engineering, but everyday tinkering. With an API for devs and a blog launching alongside, Thinking Machines is poised to remix the model training playbook.

  • OpenAI Showcases Sora 2 Video Generation with Humorous Bloopers

    In a bold leap for generative AI, OpenAI unveiled Sora 2 on October 1, 2025, positioning it as a flagship model for video and audio creation that pushes the boundaries of realism and interactivity. Building on the original Sora’s text-to-video capabilities introduced in February 2024, Sora 2 introduces synchronized dialogue, immersive sound effects, and hyper-realistic physics simulations, enabling users to craft clips up to 20 seconds long at 1080p resolution. The launch coincided with the debut of the Sora app—a TikTok-like social platform for iOS (with Android forthcoming)—where users generate, remix, and share AI videos in a customizable feed. Available initially to ChatGPT Plus and Pro subscribers in the U.S. and Canada, it offers free limited access, with Pro users unlocking a premium “Sora 2 Pro” tier for higher quality and priority generation.

    What sets Sora 2 apart is its “world simulation” prowess, trained on vast datasets to model complex interactions like buoyancy in paddleboard backflips, Olympic gymnastics routines, or cats clinging during triple axels. Demos showcased photorealistic stunts: a martial artist wielding a bo staff in a koi pond (though the staff warps comically at times), mountain explorers shouting amid snowstorms, and seamless extensions of existing footage. The model excels at animating still images, filling frame gaps, and blending real-world elements, all while maintaining character consistency and emotional expressiveness. Audio integration is a game-changer—prompts yield videos with realistic speech, ambient soundscapes, and effects, transforming simple text like “Two ice-crusted explorers shout urgently” into vivid, voiced narratives.

    Central to the launch’s buzz are the “humorous bloopers”—delightful failures that humanize the technology and highlight its evolving quirks. OpenAI’s announcements openly acknowledge these, echoing the original Sora’s “humorous generations” from complex object interactions. In Sora 2 previews, a gymnast’s tumbling routine might devolve into uncanny limb distortions, or a skateboarder’s trick could defy gravity in absurd ways, reminiscent of early deepfake mishaps but rendered with stunning detail. These aren’t hidden flaws; they’re showcased as proof-of-progress, with researchers like Bill Peebles and Rohan Sahai demonstrating during a YouTube livestream how the model now adheres better to physical laws, reducing “twirling body horror” from prior iterations.

    The Sora app amplifies this with social features, including “Cameos”—users upload face videos to insert themselves (or consented friends) into scenes, fostering collaborative creativity. Early viral clips exemplify the humor: OpenAI CEO Sam Altman, who opted in his likeness, stars in absurdities like rapping from a toilet in a “Skibidi Toilet” parody, shoplifting GPUs in mock security footage, or winning a fake Nobel for “blogging excellence” while endorsing Dunkin’. Other hits include Ronald McDonald fleeing police, Jesus snapping selfies with “last supper vibes,” dogs driving cars, and endless Altman memes. The feed brims with remixes of Studio Ghibli animations, SpongeBob skits, and Mario-Pikachu crossovers, blending whimsy with eeriness.

    Yet, the showcase isn’t without controversy. Critics decry a flood of “AI slop”—low-effort, soulless clips risking “brainrot” and copyright infringement, as the model draws from protected IP like animated series without explicit sourcing details. Sora 2’s uncanny realism fuels deepfake fears: fake news reports, non-consensual likenesses (despite safeguards), and eroding reality boundaries. OpenAI counters with visible moving watermarks, C2PA metadata for provenance, and detection tools, plus opt-out for IP holders. CEO Sam Altman quipped on X about avoiding an “RL-optimized slop feed,” emphasizing responsible scaling toward AGI milestones.

    Ultimately, Sora 2’s bloopers-infused debut democratizes video creation, sparking joy through absurdity while underscoring AI’s dual-edged sword. As users remix Altman into chaos or craft personal epics, it signals a shift: from static tools to social ecosystems where humor bridges innovation and ethics. With an API on the horizon for developers, Sora 2 invites society to co-shape its future—laughing at the glitches along the way.

  • Google Meet Introduces “Ask Gemini” AI Assistant for Smarter Meetings

    Google Workspace has rolled out “Ask Gemini,” a new AI-powered meeting consultant integrated into Google Meet, designed to provide real-time assistance, catch up late joiners, and enhance productivity during video calls. Announced as part of Google’s ongoing AI expansions in Workspace, this feature leverages Gemini’s advanced capabilities to answer questions, summarize discussions, and extract key insights, making it an indispensable tool for business users and teams.

    Ask Gemini acts as a private, on-demand consultant within Meet, allowing participants to query the AI about the ongoing conversation without disrupting the flow. Powered by Gemini’s multimodal AI, it draws from real-time captions, shared resources like Google Docs, Sheets, and Slides (with appropriate permissions), and public web data to deliver accurate responses. For example, users can ask, “What did Sarah say about the Q3 budget?” or “Summarize the action items discussed so far,” and receive tailored answers visible only to them. This is particularly useful for multitasking professionals or those joining late, where it can generate a personalized recap of missed segments, highlighting decisions, action items, and key points.

    The feature builds on existing Gemini integrations in Meet, such as “Take Notes for Me,” which automatically transcribes, summarizes, and emails notes post-meeting. With Ask Gemini, note-taking becomes interactive: It links responses to specific transcript sections for deeper context, supports real-time caption scrolling during calls, and handles complex prompts like identifying trends or generating follow-up tasks. Available in over 30 languages, it also enhances accessibility with translated captions and adaptive audio for clearer sound in multi-device setups.

    To enable Ask Gemini, hosts must activate the “Take Notes for Me” feature at the meeting’s start, and it’s turned on by default for participants—though hosts or admins can disable it. Responses remain private, with no post-call storage of data to prioritize security and compliance (GDPR, HIPAA). It’s initially rolling out to select Google Workspace customers on Business Standard ($12/user/month), Business Plus ($18/user/month), Enterprise plans, or with the Gemini add-on, starting January 15, 2025, for broader access.

    Early feedback highlights its potential to save time—up to 20-30% on meeting follow-ups—while reducing cognitive load. In tests, it accurately recaps discussions and integrates seamlessly with Workspace apps, though some users note limitations in free tiers or for non-Workspace accounts (requiring Google One AI Premium). Compared to competitors like Zoom’s AI Companion or Microsoft Teams’ Intelligent Recap, Ask Gemini stands out for its deep Google ecosystem ties and real-time querying.

    Admins can manage it via the Google Admin console under Generative AI > Gemini for Workspace > Meet, toggling features per organizational unit. For personal users, subscribe to Google One AI Premium and enable Smart features in Meet settings. As hybrid work persists, Ask Gemini positions Google Meet as a leader in AI-driven collaboration, turning meetings into efficient, insightful experiences. To try it, join a Meet call and look for the Gemini icon in the Activities panel—future updates may include more languages and integrations.

  • Zoom AI Companion: Your Smart Note-Taking and Scheduling Assistant

    Zoom has significantly enhanced its AI Companion, a generative AI-powered digital assistant integrated into the Zoom Workplace platform, to serve as an intelligent note-taking tool and smart scheduler for meetings. Launched in late 2023 and continually updated, AI Companion is now available at no extra cost with all paid Zoom subscriptions (including Pro, Business, and Enterprise plans), making it accessible for over 300 million daily Zoom users worldwide. This update, detailed in Zoom’s recent product announcements, positions AI Companion as a comprehensive productivity booster, automating tedious tasks like transcription, summarization, and calendar management to help teams focus on collaboration rather than administration.

    At its core, AI Companion acts as an AI note-taker that joins meetings automatically—whether on Zoom, Google Meet, Microsoft Teams, WebEx, or even in-person via mobile devices. It provides real-time transcription, generates comprehensive summaries, and identifies key highlights, action items, and decisions without requiring manual intervention. For instance, during a call, users can jot quick thoughts, and the AI enriches them by expanding on important points, pulling in context from discussions, documents, or integrated apps. Post-meeting, it delivers a structured summary including who spoke the most, emotional tone analysis (e.g., positive or tense), and searchable transcripts in over 32 languages (now out of preview as of August 2025). This eliminates the “caffeinated chipmunk” typing sounds of manual note-taking, allowing full participation while ensuring no details are missed—even if you’re double-booked or step away briefly.

    The smart scheduler functionality takes AI Companion further, transforming it into a proactive assistant. It analyzes meeting discussions to extract tasks and deadlines, then coordinates scheduling by checking calendars, suggesting optimal times, and even booking follow-up meetings directly. Integration with tools like Slack or Microsoft Teams allows automatic sharing of summaries and action items, streamlining team communication. For example, if a meeting uncovers next steps, AI Companion can draft emails, create to-do lists, or reschedule based on participant availability, reducing administrative overhead by up to hours per week. Advanced users can customize prompts for tailored outputs, such as generating reports in specific formats or integrating with CRM systems for sales teams.

    To get started, enable AI Companion in your Zoom settings (under Account Management > AI Companion), and it will auto-join eligible meetings. For third-party platforms, a Custom AI Companion add-on (starting at $10/user/month) extends full capabilities. Zoom emphasizes privacy, with opt-in controls, no data training on user content, and compliance with GDPR and HIPAA. Early adopters in education, sales, and consulting report 20-30% time savings, with features like multilingual support aiding global teams.

    While praised for its seamless integration, some users on Reddit note limitations in free tiers or complex customizations, recommending alternatives like Otter.ai or Fireflies for advanced analytics if Zoom’s native tool falls short. As AI evolves, Zoom’s updates—including expanded language support and agentic retrieval for cross-app insights—make AI Companion a frontrunner in meeting intelligence, rivaling standalone tools like tl;dv or Tactiq. For businesses, it’s a game-changer in hybrid work, turning meetings into actionable outcomes effortlessly.

  • Meta Ray-Ban Display Smart Glasses and Neural Band: The Future of Wearable AI

    At Meta Connect 2025 on September 17, Meta unveiled the Ray-Ban Display smart glasses, its first consumer AR eyewear with a built-in heads-up display (HUD), bundled with the innovative Meta Neural Band for gesture control. Priced at $799 for the set, these glasses represent a major evolution from previous audio-only Ray-Ban Meta models, bridging the gap to full augmented reality while maintaining Ray-Ban’s iconic style.

    The Ray-Ban Display features a monocular, full-color 600×600-pixel HUD projected onto the lower right lens, visible only to the wearer with less than 2% light leakage for privacy. It supports apps like Instagram, WhatsApp, and Facebook, displaying notifications, live captions, real-time translations, turn-by-turn directions, and music controls. The 12MP ultra-wide camera (122° field of view) enables 3K video at 30fps with stabilization, photo previews, and a viewfinder mode. Open-ear speakers and a six-microphone array handle audio, while a touchpad on the arm and voice commands via Meta AI provide additional interaction. Weighing 69g with thicker Wayfarer-style frames in black or sand, the glasses include Transitions® lenses for indoor/outdoor use and support prescriptions from -4.00 to +4.00. Battery life offers 6 hours of mixed use, extending to 30 hours with the charging case.

    The standout accessory is the Meta Neural Band, a screenless, water-resistant EMG (electromyography) wristband that reads the subtle electrical signals traveling from the brain to the hand muscles and translates them into gestures—pinches, swipes, taps, rotations, or virtual d-pad navigation with the thumb—without requiring any visible movement. It enables discreet control, even with hands in pockets or behind your back, and supports “air typing” by drawing letters on surfaces (e.g., your leg) for quick replies. With 18 hours of battery life, it fits like a Fitbit and comes in three sizes, making it ideal for seamless, intuitive interactions.

    Meta CEO Mark Zuckerberg described it as “the first AI glasses with a high-resolution display and a fully weighted Meta Neural Band,” emphasizing its role in ambient computing. The glasses connect via Bluetooth to iOS or Android devices, integrating with Meta apps for messaging, video calls (with POV sharing), and AI queries. While not full AR like the experimental Orion prototype, it overlays practical info on the real world, such as landmark details or navigation, without obstructing vision.

    Available in standard and large frame sizes starting September 30 at select US retailers like Best Buy, LensCrafters, Sunglass Hut, and Ray-Ban stores (with global expansion planned), the set includes the glasses and band in shiny black or sand. In-person demos are recommended for fitting. This launch accompanies updates to Gen 2 Ray-Ban Meta glasses ($379, with improved cameras and battery) and Oakley Meta Vanguard performance glasses ($499, launching October 21).

    Early reactions are enthusiastic. On X, tech builder Roberto Nickson (@rpnickson) called the Neural Band a “holy sh*t” moment, praising its intuitiveness but noting the display’s learning curve and AI’s room for improvement. Cheddar (@cheddar) shared a demo video, while @LisaInTheTrend highlighted real-time translation features. Hands-on reviews from The Verge and CNET describe it as the “best smart glasses yet,” though bulkier than predecessors, with potential to replace phones for errands once cellular is added. @captainatomIDC echoed the sentiment, predicting the end of the smartphone era.

    Meta’s push into AI wearables, with millions sold since 2023, challenges Apple and Google, betting on neural interfaces for the next computing paradigm. Privacy features like minimal light leakage and gesture subtlety address concerns, but experts note the need for developer access to evolve the platform. As AR evolves, the Ray-Ban Display and Neural Band could redefine daily interactions, blending style with ambient intelligence.

  • Meta Unveils Oakley Meta Vanguard: Performance AI Glasses for Athletes

    At Meta Connect 2025 on September 17, Meta and Oakley announced the Oakley Meta Vanguard, a new line of “Performance AI” smart glasses tailored for high-intensity sports and outdoor activities. Priced at $499, these wraparound shades blend Oakley’s iconic athletic design with Meta’s AI technology, positioning them as a direct competitor to action cameras like GoPro while integrating real-time fitness tracking and hands-free content creation.

    The Vanguard builds on the more casual Oakley Meta HSTN frames released earlier in 2025, focusing instead on extreme performance needs. Featuring a 12MP ultra-wide camera (122-degree field of view) centered in the nosebridge for true POV capture, the glasses support 3K video at 30fps, electronic image stabilization, timelapse modes, and an action button for quick camera switches. Users can record mind-blowing moments hands-free during runs, bike rides, ski sessions, or workouts, with immersive open-ear audio for music or calls—up to 6 hours of continuous playback or 9 hours of daily use. The charging case adds 36 hours of battery life, and a full charge takes just 75 minutes, with IP67 water and dust resistance for rugged use.

    Powered by Meta AI, the glasses offer “Athletic Intelligence” features, including real-time queries for performance stats. Through integrations with Garmin watches and Strava apps, users can ask for heart rate, pace, or elevation data via voice commands like “What’s my current pace?” without glancing at devices. Captured videos can overlay metrics graphically and share directly to Strava communities. Oakley’s Prizm lenses—available in variants like 24K, Black, Road, and Sapphire—enhance contrast and visibility in varying conditions, with a three-point fit system and replaceable nose pads for secure, customized wear.

    Available in four color options (Black with Prizm 24K, White with Prizm Black, Black with Prizm Road, White with Prizm Sapphire), the glasses weigh 66g and evoke classic Oakley wraparounds, ideal for athletes but potentially bulky for everyday use. The Meta AI app manages settings, shares content, and provides tips, unlocking features like athlete connections. Pre-orders are live now via the Meta Store and Oakley.com, with shipping starting October 21 in the US, Canada, UK, Ireland, and select European and Australian markets.

    Meta CEO Mark Zuckerberg highlighted the partnership with EssilorLuxottica (Oakley’s parent) as advancing wearable tech for sports, with endorsements from athletes like Patrick Mahomes, who called them “something completely new.” Hands-on reviews praise the secure fit and potential to replace ski goggles or earbuds, though some note the polarizing style. On X, users like @TechEnthusiast42 shared excitement: “Finally, smart glasses that won’t fog up mid-run! #OakleyMetaVanguard,” while @VRDaily hyped integrations: “Garmin + Strava in AR? Game-changer for cyclists.”

    This launch expands Meta’s smart glasses lineup alongside Ray-Ban updates and display-equipped models, emphasizing AI for everyday athletics. As the metaverse evolves, the Vanguard could redefine how athletes capture and analyze performance, blending style, tech, and endurance.

  • Meta Horizon Hyperscape: Revolutionizing VR with Photorealistic Real-World Captures

    Meta has officially launched Horizon Hyperscape Capture (Beta), a groundbreaking VR tool that allows users to scan real-world environments using their Meta Quest 3 or Quest 3S headset and transform them into immersive, photorealistic digital replicas. Announced at Meta Connect 2025 on September 17, this feature expands on the initial Hyperscape demo from last year, bringing the “holodeck” concept closer to reality by enabling anyone to create and explore hyper-realistic VR spaces from everyday locations.

    Hyperscape leverages Gaussian splatting technology—a method that reconstructs 3D scenes from 2D images with high fidelity—to capture and render environments. The process is straightforward: Users point their Quest headset at a room or space for a few minutes to scan it, uploading the data to Meta’s cloud servers for processing. Within 2 to 4 hours, a notification arrives, and the digital twin becomes accessible in the Horizon Hyperscape VR app. Early demos showcased stunning recreations, such as Gordon Ramsay’s Los Angeles kitchen, Chance the Rapper’s House of Kicks sneaker collection, the UFC Apex Octagon in Las Vegas, and influencer Happy Kelli’s colorful Crocs-filled room. These spaces feel “just like being there,” with accurate lighting, textures, and spatial details that rival professional photogrammetry tools like Varjo Teleport or Niantic’s Scaniverse.
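
    For context on what that capture step is doing, Gaussian splatting (as described in the published 3D Gaussian Splatting literature, not in Meta’s announcement) represents a scanned scene as a cloud of colored, semi-transparent 3D Gaussians and renders each pixel by alpha-blending the Gaussians that project onto it, front to back:

      C = \sum_{i=1}^{N} c_i \alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j)

    Here c_i and α_i are the color and projected opacity of the i-th Gaussian covering the pixel. Because this blend can be rasterized very quickly, scenes reconstructed this way can be re-rendered at interactive rates, which is what makes the “just like being there” fidelity practical on a headset.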

    Currently in Early Access and rolling out in the US (with more countries soon), the feature is free for Quest 3 and 3S owners via the Meta Horizon Store. It requires a strong Wi-Fi connection for cloud streaming and processing. At launch, captured spaces are personal only, but Meta plans to add sharing via private links, allowing friends to join virtual hangouts in your scanned environments—perfect for remote collaboration, virtual tourism, or reliving memories. Developers can also use it to build more realistic metaverse experiences, from education and real estate virtual tours to enterprise digital twins, reducing the cost and complexity of creating immersive content.

    The launch ties into broader Horizon updates at Connect 2025. Horizon Worlds now features faster performance via the upgraded Horizon Engine, enhanced 3D avatars, and generative AI for easier world-building. Horizon TV, Meta’s VR streaming app, is expanding with support for Disney+, ESPN, and Hulu, plus immersive effects for Universal Pictures and Blumhouse horror films like M3GAN and The Black Phone. A new fall VR game lineup includes Marvel’s Deadpool VR, ILM’s Star Wars: Beyond Victory, Demeo x Dungeons & Dragons: Battlemarked, and Reach.

    Reactions on X (formerly Twitter) are buzzing with excitement. VR enthusiast Mikaël Dufresne (@purplemikey) called Connect 2025 “impressive,” praising Hyperscape as “cool tech” alongside avatar upgrades. Japanese creator VR創世神 Paul (@VRCG_Paul) shared a hands-on video of scanning a room, noting four demo spaces but upload issues—common beta hiccups. NewsBang (@Newsbang_AI) highlighted its potential to justify Meta’s valuation amid Reality Labs’ investments, while Visit Japan XR (@visit_japan_web) emphasized tourism applications. Reddit’s r/OculusQuest community echoes this, with users bypassing US restrictions via VPN to test it, though some report black screen bugs now resolved.

    While promising, limitations include Quest 3 exclusivity (no Quest 2 support yet), processing delays, and privacy concerns over cloud uploads. Meta positions Hyperscape as a step toward a more tangible metaverse, blending physical and virtual worlds seamlessly. Download the demo or beta from the Meta Store to experience it—early adopters are already calling it a “glimpse of the future.”