
Chrome Quietly Put A 4GB AI Model On Your PC: Find And Kill It


Four gigabytes. That is what Google Chrome quietly slid onto roughly a billion desktops over the past year, with no prompt, no checkbox, no notification. The file is called weights.bin. It lives in a folder named OptGuideOnDeviceModel inside your Chrome user data directory. And until February 2026, deleting it did nothing. Chrome would just pull it down again on the next restart.

The file is Gemini Nano, Google’s smallest on-device language model. Privacy researcher Alexander Hanff’s forensic write-up of the silent Gemini Nano install on May 4, 2026 set off the firestorm, but the install itself has been running since 2024. What followed is the strangest part of the story. The most visible AI feature in Chrome, the AI Mode pill in the address bar, doesn’t use the local model at all. Those queries fly straight to Google’s cloud.

Here is what is actually on your machine, why it landed there, and the exact clicks that get rid of it in May 2026.

What Chrome Actually Put On Your Computer

The artifact is a single binary. On Windows it sits at %LOCALAPPDATA%\Google\Chrome\User Data\OptGuideOnDeviceModel\weights.bin. On macOS it lives in the parallel path inside the Chrome profile folder, and on Ubuntu it appears in the equivalent Linux profile directory. The reported weight on disk runs between 3 and 4 gigabytes depending on the build.
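For readers who would rather script the check than browse folders, the paths above can be probed directly. Here is a minimal Python sketch; the Windows path matches the one in this article, while the macOS and Linux locations assume Chrome's standard user-data directories, so a nonstandard profile location would need adjusting:

```python
import os
import platform
from pathlib import Path

def gemini_nano_model_path() -> Path:
    """Build the expected location of weights.bin for this OS."""
    system = platform.system()
    if system == "Windows":
        base = Path(os.environ.get("LOCALAPPDATA", "")) / "Google" / "Chrome" / "User Data"
    elif system == "Darwin":  # macOS default Chrome user-data directory
        base = Path.home() / "Library" / "Application Support" / "Google" / "Chrome"
    else:  # Linux default for google-chrome builds
        base = Path.home() / ".config" / "google-chrome"
    return base / "OptGuideOnDeviceModel" / "weights.bin"

def model_installed() -> bool:
    """True if the file exists and is in the multi-gigabyte range."""
    path = gemini_nano_model_path()
    return path.is_file() and path.stat().st_size > 1_000_000_000
```

If `model_installed()` returns True, the removal steps later in this piece apply; deleting the file by hand without flipping the setting only invites a redownload.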

Hanff caught the download in an unusually clean way. He spun up a brand-new Chrome profile on an Apple Silicon Mac, opened a single tab, and walked away. Then he combed the macOS .fseventsd log, where the file system events daemon records file-level changes reported by the kernel. The full 4 GB model assembled itself in roughly fifteen minutes with zero human input.

The fact-checking outfit Snopes ran the same check on six staff laptops. Three of the six found the file present, two on macOS and one on Windows, matching Hanff’s path exactly. The model is not on every Chrome install. It is on every install that meets the spec.

The Spec Sheet Almost Nobody Saw

Chrome silently runs a hardware audit before pulling the model. The Chrome for Developers built-in AI requirements page lists the bar in plain text. If your machine clears it, the download starts.

  • Storage: at least 22 GB free on the volume holding the Chrome profile
  • GPU: strictly more than 4 GB of VRAM, or a fallback path of 16 GB system RAM plus four CPU cores
  • Operating system: Windows 10 or 11, macOS 13 or later, Linux, or Chromebook Plus
  • Network: an unmetered connection for the initial pull

The 22 GB headroom requirement is the strangest line on the page. The model itself ships at roughly 4 GB, so Chrome is asking for more than five times the file’s footprint just to start. Independent developers digging into the docs have flagged the gap without a clean answer from Google. Mobile is excluded entirely. Chrome for Android, Chrome for iOS, and ChromeOS on standard Chromebooks do not run Gemini Nano.
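The storage side of that audit is easy to reproduce yourself. A short Python sketch of the two checks that need no extra libraries (the VRAM probe and the 16 GB RAM half of the fallback require platform-specific tooling, so only the disk and core-count checks are shown):

```python
import os
import shutil

REQUIRED_FREE = 22 * 1024**3  # the 22 GB free-space bar from the requirements page
MIN_CORES = 4                 # CPU-core half of the fallback path

def clears_storage_bar(volume: str = "/") -> bool:
    """True if the volume holding the Chrome profile has at least 22 GB free."""
    return shutil.disk_usage(volume).free >= REQUIRED_FREE

def clears_core_count() -> bool:
    """True if the machine has the four CPU cores the fallback path demands."""
    return (os.cpu_count() or 0) >= MIN_CORES
```

Point `clears_storage_bar` at the drive that holds your Chrome profile; if both checks pass and your GPU or RAM also clears the bar, your machine is a download candidate.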

The AI Mode Paradox

Here is the part that lands hardest with privacy lawyers. The AI feature most Chrome users actually see, the AI Mode button in the omnibox, never touches the local model.

Those queries route to Google’s cloud servers. The 4 GB on your laptop powers a different, narrower set of features: the Help me write composer in text fields, on-device scam detection on incoming pages, smart paste, page summarization, and the Prompt API exposed to web developers. Decrypt’s reporting on Chrome’s deleted privacy promise caught Google quietly rewriting the help-page line that used to say data “is not sent to Google’s servers.” The new wording acknowledges that websites calling the on-device API can see inputs and outputs governed by their own policies.

Why Hanff Calls This Illegal In Europe

The legal frame Hanff is leaning on is Article 5(3) of Directive 2002/58/EC, the ePrivacy Directive. That is the same clause behind every cookie banner you have ever clicked through. It bars storing or accessing information on a user’s terminal equipment without prior, freely given, specific, informed, and unambiguous consent, with one narrow exception for what is strictly necessary to deliver a service the user explicitly asked for.

His argument is short. The 4 GB model is information. It sits on your terminal equipment. You did not consent. Chrome works fine without it. The strict-necessity carve-out does not apply.

Hanff's own wording pulls no punches: "This is, in my professional opinion, a direct breach of Article 5(3) of Directive 2002/58/EC, a breach of the Article 5(1) GDPR principles of lawfulness, fairness, and transparency, a breach of Article 25 GDPR's data-protection-by-design obligation, and an environmental harm of a magnitude that would be a notifiable event under the Corporate Sustainability Reporting Directive."

Hanff told Snopes he intends to file criminal charges in his home jurisdiction. He has already issued a similar cease-and-desist to Anthropic over the Claude Desktop browser bridge, suggesting he sees a pattern across AI vendors rather than a single bad Google decision.

The potential exposure is not small. GDPR caps fines at the higher of 20 million euros or 4 percent of global annual revenue. For Alphabet that ceiling sits north of 12 billion dollars. No regulator has opened a formal probe yet, and the analysis is one privacy professional’s reading, not a court ruling. But Article 5(3) cases over silent device storage are exactly the cases EU regulators have been winning.

The environmental angle is the wildcard. Hanff estimated emissions of 6,000 to 60,000 tonnes of CO2 equivalent across a deployment in the hundreds of millions, a range wide enough to invite peer review but striking even at the low end.

Google’s Answer, Translated

Parisa Tabriz, Chrome VP and general manager, posted the company line on May 7. The on-device model, she said, exists to keep sensitive features like scam detection running without sending page contents to Google. The model uninstalls itself if storage runs low, and a settings switch to turn it off has been rolling out since February.

The full statement Google provided to Snopes ran on a similar track. It does not address the consent question Hanff raised at all. The wording defends what the model does. It does not defend how the model arrived.

How To Check And Remove It Right Now

Three minutes of clicking gets you to a clean answer. Open Chrome and paste chrome://on-device-internals into the address bar.

  1. Enable debugging. If the page warns that internal debugging is off, click the link it provides and tick the Enable internal debugging pages box.
  2. Open the Model Status tab. If the Foundational Model State reads No On-Device Feature Used, the model is not on your machine. You are done.
  3. If it shows installed, open Chrome’s main Settings menu, click the System tab, and look for an On-System AI or On-device AI toggle. Switch it off.
  4. The toggle deletes the file. Snopes staffers who flipped the switch saw the OptGuideOnDeviceModel folder vanish immediately, with no reinstall observed in the days that followed.
  5. If the toggle is missing, the rollout has not reached you yet. Open chrome://flags, search for optimization-guide-on-device-model, and disable it along with the Prompt API, Summarizer API, Writer API, Rewriter API, and Proofreader API flags before relaunching.

On managed Windows machines, IT teams can push the policy through a registry edit setting OptimizationGuideModelDownloading to disabled. The Google Chrome community thread on the OptGuideOnDeviceModel folder has been collecting user reports and workarounds since the file first started showing up.

One catch worth flagging. Tom’s Hardware reporters testing on Chrome v147 couldn’t see the toggle on a MacBook running the same version where it appeared on a Windows machine. The February rollout is gradual. If your Mac is missing the switch, the flags route is the only working path today. Check back after the next stable Chrome release before assuming you are stuck.

This story sits inside a wider pattern around silent AI deployment, the same fault line we covered when Utah’s SB73 VPN crackdown reshaped consumer privacy defaults earlier this month, and again when Indian regulators flagged on-device AI behavior in SEBI’s Claude Mythos cyber-suraksha task force. The browser is not a neutral pipe anymore. It has opinions about what should run on your hardware, and increasingly, it makes those decisions before asking.

Frequently Asked Questions

Is the 4 GB Gemini Nano file dangerous?

No. The file itself is not malware and does not exfiltrate data on its own. Inputs to the on-device APIs stay local unless a website chooses to call them, in which case that site’s privacy policy governs the data. The complaint is consent and disk usage, not malicious behavior. If you want it gone for principle or storage reasons, use the toggle in Settings, System, On-device AI, or disable the optimization-guide-on-device-model flag at chrome://flags.

Will Chrome redownload the file after I delete it?

It depends on how you removed it. Manually deleting the OptGuideOnDeviceModel folder without disabling the underlying setting triggers a fresh download on the next Chrome restart. Using the new February 2026 toggle in Settings, System, On-device AI stops the redownload according to Google and matches the behavior Snopes observed across multiple test machines. Confirm by reopening chrome://on-device-internals a day or two later and checking that Foundational Model State still reads No On-Device Feature Used.

Does this affect Chrome on my phone?

No. Gemini Nano in Chrome is desktop-only. 9to5Google’s breakdown of the 4GB Chrome AI storage rules confirms that Chrome for Android, Chrome for iOS, and standard ChromeOS Chromebooks are not part of the deployment. Only Windows 10 and 11, macOS 13 and later, Linux, and Chromebook Plus devices that clear the hardware bar receive the model. If you only use Chrome on a phone, you have nothing to remove.

Why does Chrome ask for 22 GB free when the file is only 4 GB?

Google has not given a public reason. Independent developers reading the requirements doc have flagged the gap as unexplained. The plausible answers are temporary build files during model assembly, future expansion headroom for larger Nano variants, or a defensive buffer to keep the OS itself stable on near-full disks. Until Google clarifies, treat 22 GB free as the trigger threshold. Drop below it and Chrome will refuse to install the model, or auto-delete an existing one.

Can my employer or school force the model onto a managed Chrome?

Yes, and they can also force it off. Chrome enterprise policy includes an OptimizationGuideModelDownloading registry key on Windows and equivalent plist controls on macOS. IT administrators can pin the model on, off, or leave it user-controlled. If you run a personal Chrome profile on a managed device, the policy your IT team sets overrides the in-browser toggle. Ask your admin if you cannot find the switch and the flags workaround also seems blocked.

The bigger lesson is the one the toggle does not solve. A browser that ships hardware-tier AI to a billion machines without a checkbox is making a product decision that used to require a conversation. Google may yet win the legal argument in Europe. The trust argument is harder, and Chrome will be defending it again the next time a 4 GB file lands on a desktop somewhere with no one’s permission but its own.

Logan Pierce is a writer and web publisher with over seven years of experience covering consumer technology. He has published work on independent tech blogs and freelance bylines covering Android devices, privacy-focused software, and budget gadgets. Logan founded Oton Technology to publish clear, no-nonsense tech news and reviews based on real hands-on testing. He has personally tested and reviewed dozens of mid-range and budget Android phones, written extensively about app privacy, and built and managed multiple WordPress publications over the past decade. Logan holds a bachelor's degree in English and studied digital marketing at a certificate level.


BofA Lifts Enovix Target To $7 As Honor Drops Killer Battery Test


BofA Securities lifted its price target on Enovix Corporation to $7 from $6 on Tuesday, May 5, 2026, after the silicon-anode battery maker filed an 8-K saying it had reached alignment with Honor on a new qualification framework for its AI-1 smartphone cell. The rating stayed at Neutral. Enovix shares trade near $6.29, giving it a market cap around $1.34 billion.

The short version: Honor agreed to drop the 0.7C accelerated cycle-life test that Enovix could not pass, and to use a silicon-specific protocol instead. BofA reads that as confirmation Enovix won’t have to reformulate its battery to keep the deal alive, with details expected on the May 13 earnings call.

What BofA Actually Changed, And Why It Matters Now

The new $7 target is a 16.7% bump. It still sits at the low end of the Street, where the average 12-month target runs near $14.45 and the high call reaches $25, per Enovix consensus data on StockAnalysis. BofA kept its underlying estimates unchanged. The move is sentiment, not numbers.
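The percentages in the note fall straight out of the quoted prices. A quick Python check using the figures above:

```python
old_target, new_target = 6.00, 7.00  # BofA's old and new price targets
last_price = 6.29                    # where shares trade
street_avg = 14.45                   # Street average 12-month target

bump = (new_target - old_target) / old_target        # the 16.7% raise
upside = (new_target - last_price) / last_price      # implied upside to BofA's target
gap_to_avg = (street_avg - last_price) / last_price  # distance to the Street average

print(f"bump: {bump:.1%}")  # 16.7%
print(f"upside to $7: {upside:.1%}")
print(f"gap to Street average: {gap_to_avg:.1%}")
```

The new target implies only modest upside from the current price, which is one more sign the move is sentiment rather than a model change.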

Analyst Bill Peterson at JPMorgan went the other way the same week. He pulled his $6 December 2026 target and cut Enovix to Underweight from Neutral, arguing the volume ramp will keep slipping and that Enovix’s energy-density lead is narrowing faster than the market thinks.

So the two desks looked at the same 8-K and reached opposite conclusions. BofA sees an unblocked path. JPMorgan sees a delayed ramp into a market where silicon-carbon incumbents are catching up.

The Test That Was Killing The Deal

For most of 2025, one specific test kept Enovix’s smartphone story stuck. Honor uses a 0.7C accelerated cycle-life protocol, an industry-standard proxy that runs cells hard to estimate long-term life in weeks rather than years. Enovix’s AI-1 cells passed nearly every requirement, but not that one.

CEO Raj Talluri described the physics on the Q4 2025 earnings call. “When you change technology from graphite batteries to silicon anode batteries, silicon anode batteries behave differently when you discharge them very fast, in this 0.7C,” Talluri said, per the Motley Fool transcript of the Enovix Q4 2025 call. “Honor and other smartphone customers understand that. They realize that this test is a proxy and an accelerated test and not a true test.”

Internal Enovix data showed AI-1 cells exceed 1,000 cycles at 0.2C, the slower rate closer to how phones actually charge. Management had floated three exits: get the customer to accept 0.2C, agree on a new accelerated protocol, or change the chemistry. The 8-K confirms door number two.

Honor Is Not Just A Customer. It Is The Competition.

Here is the part most coverage skipped. Honor already builds its own silicon battery, and it is winning awards for it.

At MWC 2026 in Barcelona, Honor took home the GLOMO for Best Disruptive Device Innovation for the silicon-carbon Blade Battery in the Magic V6. The chemistry uses 25% silicon mixed into a carbon matrix, with a next-generation cell shown at the same event pushing silicon content to 32% and energy density to 985 Wh/L. Enovix’s headline number on AI-1 is 935 Wh/L, validated by a third party as the highest commercial figure for a 100% silicon anode.

On paper, Honor’s roadmap cell is denser. The catch is durability and form factor. Silicon-carbon hits a ceiling around 700 to 985 Wh/L because the silicon load is capped by swelling. Enovix’s 3D architecture mechanically constrains a pure silicon anode and limits cell-thickness swell to under 2%, which is why the company can sell a 7,350 mAh cell with fast-charge headroom for on-device AI.

Honor’s silicon-carbon Blade is an evolutionary win on density. Enovix’s AI-1 is a structural bet that 100% silicon scales further, charges harder, and survives more cycles once OEM testing catches up to the new chemistry.

That framing changes how you read the qualification deal. Honor is not adopting Enovix because it lacks a battery. Honor wants a second source with a different physics envelope, one that pure silicon-carbon cannot match for AI workloads that hammer the cell at high discharge rates.

The Numbers Heading Into May 13

Enovix reports first-quarter 2026 results after the close on Wednesday, May 13. The Q4 2025 print, posted in late February, set the baseline.

  • $11.3 million in Q4 2025 revenue, up 16% year over year and above the $10.5 million top of guidance.
  • $31.8 million in full-year 2025 revenue, a 38% increase, driven by defense and industrial shipments from its Korean subsidiary Routejade, per Enovix's full-year 2025 results release.
  • 23% non-GAAP gross margin for the full year, with Q4 hitting 26%.
  • $621 million in cash, equivalents and marketable securities at year-end, with a $75 million buyback authorized.

For the first quarter the company guided revenue of $6.5 million to $7.5 million, a sharp sequential drop the company tied to seasonal defense program timing rather than a smartphone signal. Non-GAAP operating loss guidance runs $29 million to $32 million.

BofA said it will be listening for one thing above all else: whether the new test framework changes expected revenue for fiscal 2026 or fiscal 2027. Analysts polled by InvestingPro still see 28% revenue growth this year, layered on top of last year’s 38%, but profitability is not in the model.

The Hire Buried In The Same 8-K

The same filing that announced the Honor framework also disclosed a new sales chief. Enovix appointed Steve Bakos as Senior Vice President of Worldwide Sales, a newly created role reporting to Chief Business Officer Samira Naraghi. Bakos arrives from Infineon Technologies, where he ran corporate account sales for global accounts including Apple.

Read that hire next to the customer list. Enovix already says it is engaged with seven of the top eight global smartphone OEMs. The Apple-account pedigree on Bakos’s resume is not a coincidence on a week when management is trying to convince Wall Street that the qualification door is now open beyond Honor.

The Real Risk Is The Factory, Not The Lab

Even if the May 13 call confirms a clean path through Honor’s new test, the harder constraint is in Penang, Malaysia. Enovix’s Fab2 line runs nine process steps. Eight of them yield above 80%. The ninth, laser dicing of the electrode ribbons, runs slower than the rest of the line and gates throughput.

Talluri laid it out plainly on the Q4 call. The line is functional. The dicing step is the rate limiter. Engineers are testing multiple laser types, and there is a Plan B involving a custom mechanical punching tool. The smartphone qualification process with Honor includes an explicit contingency for an optimization path that would shift production ramp into the second half of 2026.

What Could Go Right

If the new silicon-specific protocol holds, AI-1 cells start shipping into Honor handsets in the back half of 2026, with the smart-eyewear opportunity, where qualification thresholds are lower, ramping in parallel. Enovix has already sampled multiple major eyewear OEMs and laid out a $400 million eyewear battery TAM by 2030, per Seeking Alpha's coverage of Enovix's smart eyewear pipeline.

What Could Go Wrong

The new test takes longer to run by design. BofA flagged that explicitly. If results come in flat against the longer protocol, or if Penang dicing yields stay stuck, Enovix burns more of its $621 million cushion before a single dollar of smartphone revenue books. Defense shipments will keep the lights on. They will not justify a $1.34 billion market cap.

How The Stock Sets Up Into Earnings

Shares are flat year to date at the $6.29 level, well off the 2024 highs. The setup into Wednesday’s print is asymmetric. A clean walk-through of the Honor framework, plus any color suggesting fiscal 2026 revenue does not slide further, would close the gap between BofA’s $7 floor and the $14.45 Street average. A deferred ramp, or hedged language on the new test timeline, would give JPMorgan’s downgrade thesis the data it needs.

Notice what is not in any analyst note this week: a price target tied to smart eyewear or defense alone. The entire equity story is still keyed to one Chinese smartphone OEM signing off on a battery format that has never shipped before in a phone.

Frequently Asked Questions

What did BofA actually do to Enovix’s price target?

BofA Securities raised its price target on Enovix from $6 to $7 on May 5, 2026, while keeping a Neutral rating. The bump was 16.7%. BofA did not raise its underlying earnings or revenue estimates. The move was driven entirely by the 8-K disclosing a new qualification framework with Honor. BofA says it will reassess the model after Enovix’s Q1 2026 earnings call on May 13.

Why was the 0.7C cycle-life test such a big deal?

The 0.7C accelerated test runs a battery hard to estimate long-term cycle life quickly. Enovix’s pure silicon-anode AI-1 cells exceed 1,000 cycles at the slower 0.2C rate but were missing the 0.7C target. Without an alternate protocol, Honor could not formally qualify the cell, which would have blocked any 2026 smartphone revenue. The new silicon-specific test framework removes that block without forcing Enovix to change its chemistry.

Is Honor not already using silicon batteries?

Honor uses silicon-carbon batteries, which blend roughly 25 to 32% silicon into a carbon matrix. The Magic V6 won a GLOMO at MWC 2026 for that design. Enovix’s AI-1 uses a 100% active silicon anode in a patented 3D architecture. The two technologies are not interchangeable. Honor wants Enovix as a second source for AI-class workloads where silicon-carbon hits a density and discharge ceiling.

When does Enovix report Q1 2026 earnings?

Enovix reports first-quarter 2026 results after the market close on Wednesday, May 13, 2026, with a conference call to follow. Guidance issued in February calls for Q1 revenue of $6.5 million to $7.5 million and a non-GAAP operating loss of $29 million to $32 million. BofA, JPMorgan, and Oppenheimer have all said they will be listening specifically for updates on the new Honor test framework and any change to fiscal 2026 and fiscal 2027 revenue assumptions.

Should I treat this as a buy signal on ENVX?

No. BofA explicitly kept a Neutral rating, citing manufacturing hurdles, qualification timing, and expected negative margins and cash flow for several years. JPMorgan moved the other direction with an Underweight downgrade. Wall Street’s range on the stock spans $6 to $25, which signals genuine disagreement about commercial timing. Wait for the May 13 call before drawing conclusions, and read the full risk profile in the company’s most recent 10-K.

The lab story is closer to a resolution than it has been in two years. The factory story is not. Enovix shareholders have spent most of 2026 watching the same gap between technology validation and commercial scale, and Wednesday’s call is the next chance to find out which side wins. BofA’s $7 print is a vote that the gap is closing. JPMorgan’s downgrade is a vote that it is not closing fast enough.

Disclaimer: This article reports on analyst price targets, company filings, and earnings guidance and does not constitute investment advice. Equities in early-commercialization battery companies carry significant risk, including manufacturing delays, qualification failures, and substantial cash burn. Price targets cited are accurate as of publication on May 9, 2026, and may change without notice. Readers should consult a licensed financial advisor and review Enovix’s most recent SEC filings before making any investment decision.


Korea’s AI Basic Act Goes Live With $20K Fine Cap and 10^26 Wall


Twenty thousand US dollars. That is the maximum administrative fine Korean regulators can issue against an AI company that breaks the country’s first national AI law, which entered force on 22 January 2026.

The AI Basic Act, formally the Act on the Development of Artificial Intelligence and Establishment of Trust, makes South Korea the second jurisdiction after the European Union to publish a comprehensive risk-based AI statute. Korea’s Ministry of Science and ICT (MSIT) will run a one-year fine grace period through January 2027, deferring penalties while operators line up compliance. The law covers AI developers and AI-using business operators in Korea, plus foreign firms whose systems reach Korean users above set thresholds. Frontier models trained on 10^26 floating-point operations or more sit in a separate safety bucket almost no domestic player can hit.

That last detail is the part most foreign coverage skipped: the cumulative-compute language erects a regulatory wall set so high that almost every Korean lab walks under it untouched.

Who Falls Inside the Net

The Act applies to anyone the law calls an AI business operator, and MSIT’s January decree splits that into two categories. AI developers build, train or sell AI models. AI-using business operators deploy AI inside their own products or services for Korean users. Both face obligations, though the heavier ones cluster on developers.

MSIT’s decree extends jurisdiction to foreign companies whose AI services reach Korean residents. There is no carve-out for offshore-only firms. If a US-based generative model serves chat queries to Korean accounts, the operator is on the hook the moment it crosses the local-presence thresholds.

What the Act does not do, according to Omdia’s January 2026 regulatory note on the Korean AI Basic Act, is reach the end-user. The EU’s law touches deployers and users alike. Korea’s stops at the developer and the business deploying the model. End consumers stay outside the framework.

The MSIT English-language summary of the Basic Act defines the regulated entity as any operator engaged in business “related to the AI industry,” a phrasing wide enough to bring in cloud platforms, model fine-tuners and chatbot integrators in a single sweep.

Three Tracks, Different Rules

The Act runs three parallel obligation regimes, and the decree clarifies which class of system catches which set of duties. Generative AI systems must label outputs and notify users they are interacting with AI. High-impact systems deployed in critical sectors must document risk, log decisions and provide human oversight. Frontier high-performance models must file safety plans with MSIT and report life-cycle risk outcomes.

  • Generative AI. Trigger: output reaches Korean users. Core duty: AI-use disclosure, output labeling.
  • High-Impact AI. Trigger: deployment in healthcare, energy, transport, public services, hiring, education, or finance. Core duty: risk assessment, human oversight, documentation.
  • High-Performance AI. Trigger: cumulative training compute at or above 10^26 FLOPs. Core duty: safety plan, MSIT reporting, user-protection measures.

Sector lists for the high-impact track will sit inside ministerial sub-rules due over the next several months. Cooley’s 27 January client alert on the AI Basic Act warned operators not to assume their sector is safe until the relevant ministry publishes its specific guidance.

The Compute Wall That Excludes Most of Korea

The 10^26 FLOPs threshold is the Act’s headline number, and almost no Korean firm is anywhere near it. Frontier US labs cleared that ceiling around 2024. Naver’s HyperCLOVA X family and LG’s EXAONE series, the country’s two biggest domestic foundation models, sit at least one order of magnitude below.

That gap matters. The decree's safety regime, the most stringent of the three tracks, only fires when a model both crosses 10^26 FLOPs and has a significant impact on life, physical safety, public safety, or fundamental rights. Both conditions must hold, not either. ITIF's September 2025 report on Korean AI policy, written by analysts Hodan Omaar and Daniel Castro, argued the safety bar is high enough in practice that domestic enforcement falls almost entirely on US frontier developers serving Korean users.

The ITIF brief made one point that local commentary has avoided: Korea’s safety regime is configured against compute scale rather than deployment context. A small model fine-tuned for a sensitive medical use can hide under the threshold. A much larger general-purpose model with no clinical exposure trips it.

Compute thresholds are a design choice the EU made too, with its 10^25 FLOPs trigger for general-purpose models with systemic risk. Korea pushed the bar an order of magnitude higher. Whether that gap reflects domestic frontier capability or a quiet decision to keep Korean labs outside the safety perimeter is the live policy question.
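A back-of-envelope way to see where a given run lands against either trigger is the common estimate of roughly 6 FLOPs per parameter per training token for dense transformers. Here is a Python sketch; the 70-billion-parameter, 15-trillion-token figures below are purely illustrative, not numbers from the Act or from any named lab:

```python
KOREA_THRESHOLD = 1e26  # Korea's high-performance AI track
EU_THRESHOLD = 1e25     # EU AI Act systemic-risk presumption for GPAI

def training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Hypothetical dense model: 70B parameters trained on 15T tokens
flops = training_flops(70e9, 15e12)       # 6.3e24
crosses_korea = flops >= KOREA_THRESHOLD  # False: an order of magnitude short
crosses_eu = flops >= EU_THRESHOLD        # False, though much closer
```

Even a run of this fairly large scale clears neither bar, which shows how much headroom Korea's 10^26 line leaves domestic developers, and how much closer the EU's 10^25 trigger sits.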

Foreign vendors should expect the threshold to draw the most attention from MSIT inspectors during the grace period. The ministry has every incentive to show the safety regime has teeth, and US labs are the only realistic test subject.

The Domestic Representative Trigger

Foreign AI operators without a Korean address must appoint a domestic representative once they cross any one of three quantitative thresholds. The decree fixes those thresholds in clear numbers.

  • KRW 1 trillion in total annual revenue in the previous year, roughly $720 million at May 2026 exchange rates.
  • KRW 10 billion in AI-services revenue in the previous year, about $7.2 million.
  • One million daily active Korean users averaged over the three months before year-end.

The local agent must hold a registered Korean address and respond to MSIT inquiries on the foreign operator’s behalf, including safety-measure submissions for frontier models and high-impact-status confirmations. The US Department of Commerce trade.gov market briefing on the Korean AI Basic Act flagged the third trigger as the one most likely to catch US generative-AI vendors with consumer footprints.
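The three triggers reduce to an any-of check. A minimal Python sketch using the KRW figures from the decree as quoted above (currency conversion and the three-month DAU averaging are left out):

```python
KRW_TRILLION = 1_000_000_000_000
KRW_BILLION = 1_000_000_000

def needs_domestic_representative(
    total_revenue_krw: int,
    ai_services_revenue_krw: int,
    avg_daily_korean_users: int,
) -> bool:
    """Any single threshold from the MSIT decree triggers the obligation."""
    return (
        total_revenue_krw >= 1 * KRW_TRILLION           # total annual revenue
        or ai_services_revenue_krw >= 10 * KRW_BILLION  # AI-services revenue
        or avg_daily_korean_users >= 1_000_000          # averaged Korean DAU
    )

# A consumer chatbot with modest revenue but a large Korean user base
print(needs_domestic_representative(0, 5 * KRW_BILLION, 1_200_000))  # True
```

That last case is the trade.gov warning in code form: consumer-scale usage alone trips the obligation, regardless of revenue.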

Fines That Cap at KRW30 Million

The penalty ceiling is the single largest gap between Korean and EU enforcement. KRW30 million, about $20,300 at current rates, is the maximum administrative fine. It applies to failure to disclose AI use, failure to appoint a domestic representative, and refusal of MSIT inspections.

Compare that to the EU AI Act's ceiling for prohibited-practice violations, the higher of 35 million euros (roughly $38 million) or 7% of global annual turnover. A single Korean fine would not buy a frontier developer one day of training compute.

MSIT has signaled enforcement will lean on corrective orders rather than fines for the first 12 months. Where a service threatens safety, the ministry can order suspension under the Act’s enforcement decree, a power that bites even when the cash penalty does not.

Critics inside the Korean bar have called the fine ceiling symbolic. Supporters say a soft launch builds compliance muscle without choking a domestic AI sector still chasing US and Chinese rivals on capital and talent.

Where Seoul Broke From Brussels

The Basic Act borrows the EU’s risk-based architecture but breaks from it on three structural choices. Korea publishes no list of banned AI uses. The EU bans eight outright, including social scoring and untargeted facial-recognition scraping. Korea also writes no general-purpose AI category and no copyright-compliance language for training data.

Innovation-led, not rights-led. That is how the Future of Privacy Forum’s analysis of the Korean AI Framework Act framed the difference. The EU starts from a fundamental-rights baseline. Korea starts from an industrial-policy baseline and adds risk controls on top.

Korea’s broader strategy pairs regulation with KRW100 trillion in announced AI infrastructure spending through 2027, the Library of Congress Global Legal Monitor entry on the Korean AI legal framework noted. Read together, the message to operators is straightforward: build here, ship here, and the regulatory cost will stay light enough to absorb.

Frequently Asked Questions

Do I Have to Appoint a Korean Representative if My AI Service Has Korean Users?

Only if you cross one of three thresholds: total annual revenue above KRW1 trillion, AI-services revenue above KRW10 billion, or one million daily Korean users averaged over the three months before year-end. If you sit below all three, no domestic representative is required, though MSIT may still ask for safety information through other channels. Threshold questions go through the official AI Basic Act portal.
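The three thresholds above are an or-condition: crossing any single one triggers the appointment duty. A minimal sketch of that logic follows; the function name, parameter names, and the exact comparison operators (strictly above for the revenue figures, at-or-above for the user count) are illustrative assumptions, not wording from the decree.

```python
def needs_domestic_representative(
    total_revenue_krw: float,
    ai_services_revenue_krw: float,
    avg_daily_korean_users: float,
) -> bool:
    """Illustrative check of the three appointment thresholds described
    above. Crossing any one of them triggers the requirement."""
    return (
        total_revenue_krw > 1_000_000_000_000        # KRW 1 trillion total annual revenue
        or ai_services_revenue_krw > 10_000_000_000  # KRW 10 billion AI-services revenue
        or avg_daily_korean_users >= 1_000_000       # 1M daily Korean users, 3-month average
    )
```

An operator at KRW500 billion total revenue, KRW5 billion AI revenue, and 500,000 daily Korean users would sit below all three and fall outside the requirement; confirm any edge case against the official portal rather than a sketch like this.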

When Will MSIT Start Issuing Actual Fines?

Not before 22 January 2027. MSIT confirmed a one-year grace period during which the ministry will use corrective orders and guidance instead of financial penalties. Suspension orders for safety-threatening services remain available immediately. Operators should treat 2026 as a remediation year, document compliance work in writing, and budget for active fine exposure starting in early 2027.

Does the Act Apply to My Open-Source Model?

Probably yes, if the model is offered to Korean users in any commercial form, including hosted APIs and paid fine-tuning services. The law defines covered entities by business activity, not licensing model. Pure non-commercial research releases may sit outside the scope, but the decree does not carve them out explicitly. Track MSIT’s sector guidance and watch for upcoming open-source clarifications expected in mid-2026.

What Counts as a High-Impact System?

AI deployed in healthcare diagnostics, energy and utilities operations, transport-safety functions, public-service delivery, hiring decisions, educational evaluation, and finance-related credit and risk scoring. The full sector list is being finalized through ministerial sub-rules across 2026. If your system touches any of those areas, assume it is high-impact and start documenting risk-management procedures now rather than waiting for the final list.

How Much Compute Triggers the Frontier Safety Track?

Cumulative training compute of 10^26 floating-point operations or more, combined with a system that materially affects life, safety, or fundamental rights. Both conditions must apply. As of May 2026, no Korean foundation model is publicly known to clear 10^26 FLOPs. The threshold mostly catches large US frontier labs serving Korean accounts, not domestic developers.
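To see why domestic models stay under the trigger, the standard back-of-envelope estimate of training compute, roughly 6 FLOPs per parameter per training token, is enough; that heuristic is a common industry rule of thumb, not a formula from the Act, and the example model size is hypothetical.

```python
# Frontier-safety compute trigger cited above (one of two conditions;
# the system must also materially affect life, safety, or rights).
FRONTIER_THRESHOLD_FLOPS = 1e26

def approx_training_flops(params: float, tokens: float) -> float:
    # Common heuristic: ~6 FLOPs per parameter per token
    # (forward plus backward pass). Not a statutory formula.
    return 6.0 * params * tokens

# A hypothetical 70B-parameter model trained on 15T tokens
# lands around 6.3e24 FLOPs, well under the 1e26 trigger.
flops = approx_training_flops(70e9, 15e12)
crosses_trigger = flops >= FRONTIER_THRESHOLD_FLOPS
```

By this estimate, a model would need on the order of a trillion parameters trained on tens of trillions of tokens to cross 10^26, which matches the article's point that the track mostly catches frontier labs abroad.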

MSIT’s decree clarifies the law more than the law clarifies itself, and that pattern will hold through 2026 as the ministry publishes sector-by-sector sub-rules. Operators that wait for full text to lock before starting compliance work will burn the grace period.

The bigger question for foreign capitals watching Seoul is whether Korea’s lighter-touch model becomes a template for other Asian markets. Japan, Singapore and Indonesia have all signaled they want a regulatory floor that does not strangle domestic AI sectors before those sectors grow. Korea has just shown them what that floor looks like.

Disclaimer: This article reports on South Korea’s AI Basic Act and accompanying presidential decree as of May 2026 and does not constitute legal advice. Regulatory thresholds, sector definitions, and ministerial sub-rules remain subject to revision throughout the 2026 implementation period. Operators with potential Korean exposure should consult licensed Korean counsel before relying on any specific threshold, fine ceiling, or compliance interpretation cited here. Currency conversions reflect rates accurate at publication and may shift.

Apple AirPods With Cameras Hit Final Test Stage, Siri Holds Up Launch

Apple has pushed its camera-equipped AirPods into the final development stage before mass production, according to a Bloomberg report by Mark Gurman published May 7. Engineers inside Cupertino are now testing prototypes at the design validation testing phase, known internally as DVT. That’s the second-to-last gate before production validation, and it usually runs three to six months.

The earbuds carry low-resolution cameras in both stems. They aren’t built to shoot photos or video. They’re built to feed a visual stream to Siri so the assistant can see what the wearer sees, identify objects, read environments, and answer questions about them. Gurman’s sources say Apple may brand the device AirPods Ultra and price it above the $249 AirPods Pro 3.

And here’s the catch. The hardware is nearly done. The software isn’t. Apple wanted to ship these earbuds in the first half of 2026. That window is gone, and the reason has nothing to do with the cameras.

Why The Hardware Is Ready But The Launch Isn’t

DVT is a specific milestone. Apple’s prototypes at this stage carry near-final industrial design, near-final internal components, and near-final firmware. The next step is PVT, where contract manufacturers like Luxshare or Foxconn run small batches on the actual production line to expose tolerance issues. After that, mass production starts.

So the engineering side is on schedule. The blocker is Siri. 9to5Mac’s coverage of the Bloomberg scoop notes Apple’s overhauled, LLM-powered Siri is now slated for September alongside iOS 27, macOS 27, and iPadOS 27. Without that Siri, the cameras have nothing intelligent to talk to. A pair of earbuds that can see your kitchen counter is useless if the voice assistant attached to it can’t tell a tomato from a tangerine.

Gurman’s sources put it bluntly: concerns about the AI features could push the launch further if Apple isn’t satisfied with the visual intelligence layer. That phrasing matters. It’s the same phrasing Apple used internally before delaying the personalized Siri features announced at WWDC 2024.

The Four-Year Backstory

The project started inside Apple in 2022. Ming-Chi Kuo’s May 2025 supply chain note first laid out the production timeline, calling for mass production in 2026 with a possible slip to 2027 if battery life or thermal constraints proved harder than expected. Kuo also flagged a custom chip codenamed Glennie meant to handle the visual processing on-device.

Bloomberg first reported the camera AirPods existed in February 2024. Kuo confirmed the project four months later. Then Apple killed a parallel project: an Apple Watch with a built-in camera, scrapped quietly last year. The Watch camera died. The AirPods camera survived. That tells you where Apple thinks the AI wearable category lives.

What The Cameras Actually Do

The cameras feed Siri. That’s the entire pitch. Ask Siri what’s in your fridge while wearing the earbuds, and the visual stream goes to Apple’s servers, gets parsed, and comes back as a recipe suggestion. Walk past a building, ask what it is, and the camera handles the lookup. Get turn-by-turn directions that update based on what’s actually in front of you, not just GPS coordinates.

  • Object recognition for groceries, books, packaging, signage, plants, and household items.
  • Contextual reminders triggered by what the camera sees, like medication on a counter or keys on a hook.
  • Enhanced navigation that supplements GPS with visual landmarks, pulled live from the user’s surroundings.
  • Vision Pro pairing, where head-direction data sharpens spatial audio inside Apple’s headset.

An LED indicator lights up when the cameras are active. That’s Apple’s headline privacy feature. How visible the LED actually is on a stem-mounted earbud, and how many strangers will notice it, remains the open question. The Mac’s green webcam light works because you stare at the screen. An LED tucked under your earlobe is a different physics problem.

Apple Walks Into A Market Meta Already Owns

The competitive picture is brutal. Meta’s Ray-Ban smart glasses captured between 75 and 80 percent of the smart-eyewear market in 2025, with more than seven million units sold. The TechBuzz analysis of Meta’s wearables performance reports the company plans to double smart-glasses production capacity by the end of 2026 while cutting its VR budget. The glasses are working. The headsets aren’t.

OpenAI is pushing into the same lane. Sam Altman’s company paid $6.4 billion last year for Jony Ive’s design startup io and is now building a screenless, voice-first AI device targeting initial production of 40 to 50 million units through Foxconn. Court filings cited by Adweek’s review of the OpenAI hardware litigation indicate the first device won’t be wearable, but earbud-style and pen-style follow-ups are in development under codenames Sweetpea and Gumdrop.

The Three-Way Race In One Table

Player | Form factor | AI assistant | Status
Meta | Ray-Ban and Oakley smart glasses | Meta AI | Shipping, 7M+ units sold
Apple | AirPods with stem cameras | Next-gen Siri (Sept 2026) | Late testing (DVT)
OpenAI / Jony Ive | Screenless device, then earbuds | ChatGPT | H2 2026 target, delays reported
Motorola | AI pendant (concept) | Moto AI | CES 2026 reveal
Amazon | Bee wearable (acquired 2025) | Alexa+ | Wrist and lapel form factors

The pendant category itself is a graveyard. Humane’s AI Pin launched to brutal reviews in 2024 and was discontinued within a year. Friend, the AI necklace startup, became the punchline of New York subway graffiti, with riders writing Go make some real friends across its ads. Apple is entering a category that has burned every company that came before it.

The Privacy Problem Apple Is About To Inherit

Camera-equipped wearables have already created legal exposure. TechCrunch reported in March on a class action filed against Meta after a Swedish newspaper investigation found that workers at a Kenya-based subcontractor were reviewing customer footage. The reviewed material included nudity, sex, and footage of people using the toilet. The U.K. Information Commissioner’s Office opened its own investigation. Meta said faces were blurred. Sources told reporters the blurring didn’t always work.

"Consumer expectations regarding privacy haven't gone away entirely, but they are shifting. We're already being surveilled by billions of smartphones, city camera networks and smart devices that we willingly placed in our homes."

That’s Avi Greengart, lead analyst at Techsponential, on why Meta keeps selling glasses despite the lawsuits. Greengart told reporters he doesn’t expect AI wearables to replace smartphones soon, but does expect them to land alongside watches, rings, and glasses as standard kit. His framing matters because it’s the bull case. The bear case is the Kenyan subcontractor.

Apple’s privacy track record is genuinely better than Meta’s. The company processes most Siri requests on-device, encrypts the rest, and runs cloud workloads through Private Cloud Compute. But the moment cameras enter the picture, the data profile changes. Visual data is harder to anonymize than text. A blurred face is still a body, a tattoo, a uniform, a setting. Apple will have to explain, in detail, what gets sent to the cloud, what stays on the device, what gets deleted, and who reviews edge cases.

Why Google Glass Still Matters

The 2013 backlash against Google Glass set the template. Bars banned wearers. The word Glasshole entered the dictionary. The product died. Meta’s Ray-Bans survived where Glass didn’t because they look like sunglasses, not goggles, and because Meta marketed them as a Ray-Ban product first and a camera second.

Apple’s bet is similar. AirPods are already in the wild on hundreds of millions of ears. Adding cameras to a familiar object is less alien than strapping a screen to someone’s face. Whether that’s enough cover when the cameras are pointed at strangers in coffee shops is the question every reviewer will ask in the first week.

Pricing And Branding Signal Where Apple Is Aiming

Gurman’s sources say the device will sit above $249. The AppleInsider read of Bloomberg’s pricing intelligence notes that AirPods Ultra branding would let Apple introduce a new tier without disrupting the AirPods Pro 3 lineup. Apple last spun out an Ultra brand for the Apple Watch Ultra in 2022, where the Ultra commands a roughly two-times premium over the standard Watch.

Applied to AirPods, that math suggests a price band somewhere between $349 and $449. Bloomberg hasn’t confirmed a specific figure. But the Ultra naming convention and the cost of adding cameras, IR sensors, and a custom processing chip make a $249 price untenable.

Stats That Frame The Bet

  • 4 years of internal development before reaching DVT.
  • $249 floor price for current AirPods Pro 3, the launchpad for Ultra pricing.
  • 75-80% of the smart-glasses market currently held by Meta.
  • 40-50 million units targeted by OpenAI for its first AI device.
  • September 2026 earliest realistic launch window if Siri ships on time.
  • 7 million+ Meta smart glasses sold in 2025, the comparison set Apple has to beat.

The Timeline From Here

  1. May 2026: DVT prototypes confirmed in Bloomberg report.
  2. Summer 2026: PVT batches expected at contract manufacturers.
  3. September 2026: iOS 27 launch with new Siri, the earliest plausible AirPods Ultra debut.
  4. Late 2026 or H1 2027: Realistic ship date if Siri features pass internal review.
  5. 2027: Lighter AirPods Max refresh, separately, per Kuo’s roadmap for Apple’s audio lineup through 2027.

One detail worth flagging for Apple Vision Pro owners: Kuo previously reported that the camera AirPods would integrate with Vision Pro to enhance spatial audio. Turn your head toward a sound source in a video, and the audio profile shifts to emphasize that direction. That’s a feature pair, not a coincidence. Apple is building hardware that compounds across its product line, the same way the H2 chip ties the Watch and AirPods together for hearing-aid features.

For broader context on how on-device biometric sensing is migrating across product categories, see our coverage of Samsung’s Sensor OLED panel that reads pulse and blood pressure through the display. The thread is the same: sensors disappear into devices people already own.

Frequently Asked Questions

When will AirPods with cameras actually go on sale?

The earliest realistic window is September 2026, alongside iOS 27 and the new Siri. Bloomberg reports the hardware is in design validation testing, which typically runs three to six months before production. But Apple has tied the launch to its overhauled Siri, and any delay to that software project pushes the AirPods Ultra into late 2026 or the first half of 2027.

How much will AirPods Ultra cost?

Above $249. Apple hasn’t confirmed a price, but Gurman reports Ultra branding and a premium positioning over the AirPods Pro 3. Based on how Apple priced the Apple Watch Ultra at roughly two times the standard Watch, a $349 to $449 range is the most credible estimate. Final pricing won’t be public until Apple’s official launch event.

Can the cameras take photos or record video?

No. The cameras are low-resolution modules that feed visual data to Siri for object recognition, contextual reminders, and navigation. They cannot capture or store photos or video for the user. An LED indicator on each earbud lights up when the cameras are active, similar to the green light on a Mac webcam.

Will AirPods Ultra work with non-Apple phones?

The visual features are tied to Siri and Apple Intelligence, which only run on iPhones, iPads, and Macs. Standard Bluetooth audio playback should work with Android phones, as it does with current AirPods, but the camera features and AI integrations will not. If you’re on Android, you’re getting expensive earbuds without the headline feature.

Are camera-equipped earbuds a privacy risk for people around me?

The cameras don’t record video, but they do capture environment data and send it to Apple’s servers for processing. Apple’s privacy stance is stronger than Meta’s, and the LED indicator signals when cameras are active. Still, anyone uncomfortable being scanned by a stranger’s earbuds has a legitimate concern. Local laws on consent recording vary, and Apple has not yet detailed its data retention policies for visual data.

What happens if Apple’s new Siri isn’t ready?

The launch slips. Bloomberg’s sources explicitly tied the AirPods Ultra release to the AI Siri rebuild, and Apple has already pushed personalized Siri features once. If Siri 2.0 misses September 2026, expect AirPods Ultra to follow it into 2027. The earbuds without the assistant are just expensive AirPods with extra hardware nobody can use.

The story to watch over the next six months isn’t the earbuds. It’s Siri. Apple has built the body. The brain has to ship for any of this to matter, and Cupertino has missed that deadline before.
