AI
Google AI Overviews Adds Subscribed Label, Reddit Quotes Inline
Google added a “Subscribed” label to AI Overviews and AI Mode citations on Wednesday, a green flag that lights up only when a reader has linked a paid publisher subscription to their Google account. The same announcement let Reddit quotes start appearing inside search results through a new Expert Advice panel. Two pitches sit inside one rollout, aimed at very different audiences.
The Mountain View company also added hover previews that show the publisher name and site title before a click, plus a Further Exploration panel that surfaces deeper analysis below AI summaries. Google says paying readers click flagged links more often in early tests. Publishers, meanwhile, have watched search referral traffic crater across 2025.
The Four Changes Inside Wednesday’s Update
Wednesday’s announcement bundles four discrete features on top of AI Overviews and AI Mode. Each targets a friction point users have raised since AI Overviews launched in May 2024. None addresses the root publisher complaint about disappearing referrals.
The Subscribed label is the headline addition. It surfaces only for users who connect a paying subscription to their Google account through the company’s existing subscription linking program. Hover previews now show the publisher’s name and site title before a reader commits to a click. A new related-topics panel called Further Exploration sits below the AI summary, pointing to deeper analysis. And an Expert Advice panel imports user-generated commentary from Reddit and similar forums, attributed by handle and subreddit.
Google said the Subscribed label helps “you quickly access the content you trust and get more value from your subscriptions.” The company said paying readers in early tests were “significantly more likely” to click flagged links over equivalent unflagged ones. Both phrasings sidestep the question of how much aggregate traffic the new label can move.
- Subscribed label: green flag on citations from publications a user pays for.
- Hover previews: publisher name and site title appear before the click.
- Further Exploration: related-topics panel below the AI summary linking to deeper analysis.
- Expert Advice: imported quotes from Reddit and other forums with handle attribution.

Why The Subscribed Label Helps Few Publishers
The Subscribed label only fires for users who have actively connected a paid subscription through Google’s subscription linking program. That is a narrow base. Subscription linking has existed since 2018 and remains opt-in on both the user and publisher side, with adoption concentrated among the largest news brands.
Google encouraged publishers in the same blog post to “reach out” about how to push paying readers toward linking accounts. The encouragement reads less like a feature announcement and more like a recruitment drive. Publishers without scale, without paywalls, or without subscriber relationships in Google’s account graph collect zero benefit.
The math is harsh for everyone outside the top tier. A small lifestyle publisher with no paywall watches the Subscribed label do nothing on its citations. Oton Technology’s catalog of 500 niche publishers still earning through Google ad properties shows how thin the margin already runs at that tier. Same story for an independent technology blog, a regional newsroom that gives away digital, or any aggregator. Their pages still appear in AI Overviews. Just without the green flag Google’s own data says drives clicks.
The lift, even where it applies, sits on top of the broader traffic decline. A paying subscriber who clicks slightly more often does not refill the bucket of free readers Google’s AI summaries no longer route to publisher sites. Subscription linking is a wedge for the few. It isn’t a recovery plan for the industry.
The Chartbeat Data Behind The Pivot
Search referral traffic from Google has dropped sharply across publisher size brackets. Chartbeat’s March 2026 small-publisher referral study shared exclusively with Axios shows the steepest losses concentrated at the bottom of the publishing pyramid, where independent and niche operators have the least cushion. Page views from Google Search alone dropped 34% globally between December 2024 and December 2025, per Press Gazette’s 2026 publisher traffic trends report.
The chatbot side has not refilled the gap. ChatGPT, Perplexity, and Google’s own Gemini referrals together account for less than 1% of publisher pageviews. ChatGPT referrals grew over 200% in the period, but they grew from a base too small to matter in any newsroom budget meeting.
- 60% referral traffic decline for small publishers running 1,000 to 10,000 daily page views.
- 47% decline for medium publishers running 10,000 to 100,000 daily page views.
- 22% decline for large publishers above 100,000 daily page views.
- 34% drop in Google Search page views globally between December 2024 and December 2025.
- Less than 1% of publisher pageviews now come from AI chatbot referrals combined.
Reddit’s $60 Million Quotes Now Sit Inside Search
The Expert Advice panel lifts user reviews and troubleshooting tips from Reddit and similar forums, displaying them inline next to AI Overview answers. Each quote shows the creator handle and the source subreddit. The reader can copy the substance without clicking out.
Google’s licensing arrangement with Reddit, signed in February 2024 and reportedly worth $60 million per year, gave the search company API access to every public Reddit post and comment. Reddit was the most-cited domain in Google AI Overviews and Perplexity through the second half of 2024 and the first half of 2025, according to 5W Public Relations’ AI Platform Citation Source Index 2026. Reddit URLs ranking on Google jumped from roughly 22 million to over 41 million in less than a year following the licensing deal. The Expert Advice panel formalizes that integration on the user-facing side.
The arrangement carries a contradiction Google has not addressed publicly. While the Subscribed label aims to push readers toward publications they pay for, the Expert Advice panel routes casual queries away from outbound clicks entirely. Both features shipped on the same day.
Industry voices have flagged the substitution effect for two years. Danielle Coffey, CEO of the News/Media Alliance, told the Senate Judiciary Subcommittee on Privacy, Technology, and the Law in January 2024 that AI search experiences directly displace publisher referrals.
“Generative AI tools use our content to substitute for the news content that consumers would otherwise obtain directly from publishers, depriving us of both audience and revenue.”
Reddit’s volunteer moderators have flagged a parallel concern: their unpaid posts now sit inside a paid corporate licensing arrangement. Reddit Inc. collects the $60 million annual fee. The volunteer authors who actually wrote the upvoted comments now appearing in Google search results have received nothing. Nieman Journalism Lab’s analysis of Chartbeat AI referral data suggests the displaced traffic is not arriving back through chatbot referrals either.
Further Exploration Quietly Demotes Article Links
The Further Exploration panel is the change Google described last. It is also the one most likely to push article links further down the screen. The panel sits between the AI summary and the next set of organic results, surfacing related deeper analysis or longer-form research.
The example Google offered involves a search about green urban spaces. Further Exploration suggests a World Economic Forum report on urban planning and a piece on the architects who designed New York’s High Line. Useful for the curious reader. Less useful for the publisher whose article now sits a full screen lower than it would have a week ago. Every additional inline panel between the query and the blue link is another tax on click-through.
What Publishers Should Watch Next
The next test arrives when Google reports adoption numbers for subscription linking. The label only delivers measurable lift if subscribers actually link their accounts. Without that step, the green flag never appears in their search results, no matter how many publications they pay for.
Publishers also have to decide whether to actively push readers toward linking. Asking subscribers to connect a Google account adds friction to a flow most newsrooms spent years optimizing for direct relationships. Trading direct visibility for citation flagging is a real strategic call, not a freebie.
The Reddit integration meanwhile is the change with the longest tail. A normalized Expert Advice panel that absorbs review and troubleshooting queries removes a meaningful slice of the long-tail traffic publishers used to capture from product reviews, how-to articles, and consumer-comparison pieces. That traffic does not come back, regardless of what Google ships next quarter.
Frequently Asked Questions
How Do I Get The Subscribed Label To Show Up On My Searches?
You need to link your paying subscriptions to your Google account through Google’s subscription linking program. The connection runs through your Google Account settings under linked services, or through your publisher’s account page where the option is offered. Without your active link, the green flag never appears, even if you pay for a publication every month. Reach out to your publisher if you do not see a link option in their account dashboard.
Are These Updates Live Today Or Rolling Out Gradually?
The features started rolling out on Wednesday and will reach users in stages over the coming weeks, matching Google’s pattern with previous AI Overviews launches. AI Mode access remains gated to users who have opted in through Search Labs in supported markets. Expect Subscribed labels to appear first for users who already have linked accounts and a query that triggers an AI summary.
Does The Expert Advice Panel Replace Reddit Visits?
For many users, yes. The Expert Advice panel imports user reviews and troubleshooting tips directly into search results with the original commenter’s handle and subreddit listed beneath. Reddit still receives a citation, but the click incentive drops sharply when the substance sits in front of the reader already. This mirrors what AI Overviews did to how-to and consumer-review traffic during 2024 and 2025.
Do These Updates Help Small Publishers At All?
Not directly. The Subscribed label only fires for users with linked paid subscriptions, which excludes most small and mid-tier publishers without paywalls. Hover previews showing the publisher name may help with brand recognition but do not move click-through meaningfully. Small publishers should focus on the Further Exploration panel, which surfaces deeper analysis and may include independent voices outside the top-100 brands.
Wednesday’s update is best read as Google triaging two competing pressures. Publishers want a path back to traffic. Users want answers in fewer clicks. The Subscribed label helps a narrow slice of the first group. Everything else in the announcement helps the second.
The label is real, the previews are useful, and the Further Exploration panel will surface good deeper analysis. None of those facts changes the underlying trend Chartbeat measured. The pipe is narrower than it was, and Wednesday’s update did not widen it.
AI
Korea’s AI Basic Act Goes Live With $20K Fine Cap and 10^26 Wall
Twenty thousand US dollars. That is the maximum administrative fine Korean regulators can issue against an AI company that breaks the country’s first national AI law, which entered force on 22 January 2026.
The AI Basic Act, formally the Act on the Development of Artificial Intelligence and Establishment of Trust, makes South Korea the second jurisdiction after the European Union to publish a comprehensive risk-based AI statute. Korea’s Ministry of Science and ICT (MSIT) will run a one-year fine grace period through January 2027, deferring penalties while operators line up compliance. The law covers AI developers and AI-using business operators in Korea, plus foreign firms whose systems reach Korean users above set thresholds. Frontier models trained on 10^26 floating-point operations or more sit in a separate safety bucket almost no domestic player can hit.
That last detail is the part most foreign coverage skipped. Strip out the cumulative-compute language and a regulatory wall remains that almost every Korean lab walks under.
Who Falls Inside the Net
The Act applies to anyone the law calls an AI business operator, and MSIT’s January decree splits that into two categories. AI developers build, train or sell AI models. AI-using business operators deploy AI inside their own products or services for Korean users. Both face obligations, though the heavier ones cluster on developers.
MSIT’s decree extends jurisdiction to foreign companies whose AI services reach Korean residents. There is no carve-out for offshore-only firms. If a US-based generative model serves chat queries to Korean accounts, the operator is on the hook the moment it crosses the local-presence thresholds.
What the Act does not do, according to Omdia’s January 2026 regulatory note on the Korean AI Basic Act, is reach the end-user. The EU’s law touches deployers and users alike. Korea’s stops at the developer and the business deploying the model. End consumers stay outside the framework.
The MSIT English-language summary of the Basic Act defines the regulated entity as any operator engaged in business “related to the AI industry,” a phrasing wide enough to bring in cloud platforms, model fine-tuners and chatbot integrators in a single sweep.

Three Tracks, Different Rules
The Act runs three parallel obligation regimes, and the decree clarifies which class of system catches which set of duties. Generative AI systems must label outputs and notify users they are interacting with AI. High-impact systems deployed in critical sectors must document risk, log decisions and provide human oversight. Frontier high-performance models must file safety plans with MSIT and report life-cycle risk outcomes.
| Track | Trigger | Core Duty |
|---|---|---|
| Generative AI | Output reaches Korean users | AI-use disclosure, output labeling |
| High-Impact AI | Healthcare, energy, transport, public services, hiring, education, finance | Risk assessment, human oversight, documentation |
| High-Performance AI | Cumulative training compute at or above 10^26 FLOPs | Safety plan, MSIT reporting, user-protection measures |
Sector lists for the high-impact track will sit inside ministerial sub-rules due over the next several months. Cooley’s 27 January client alert on the AI Basic Act warned operators not to assume their sector is safe until the relevant ministry publishes its specific guidance.
The Compute Wall That Excludes Most of Korea
The 10^26 FLOPs threshold is the Act’s headline number, and almost no Korean firm is anywhere near it. Frontier US labs cleared that ceiling around 2024. Naver’s HyperCLOVA X family and LG’s EXAONE series, the country’s two biggest domestic foundation models, sit at least one order of magnitude below.
That gap matters. The decree’s safety regime, the most stringent of the three tracks, only fires when a model crosses both 10^26 FLOPs and a significant impact on life, physical safety, public safety, or fundamental rights. Both conditions, not either. ITIF’s September 2025 report on Korean AI policy, written by analysts Hodan Omaar and Daniel Castro, argued the safety bar is high enough in practice that domestic enforcement falls almost entirely on US frontier developers serving Korean users.
The ITIF brief made one point that local commentary has avoided: Korea’s safety regime is configured against compute scale rather than deployment context. A small model fine-tuned for a sensitive medical use can hide under the threshold. A much larger general-purpose model with no clinical exposure trips it.
Compute thresholds are a design choice the EU made too, with its 10^25 FLOPs trigger for general-purpose models with systemic risk. Korea pushed the bar an order of magnitude higher. Whether that gap reflects domestic frontier capability or a quiet decision to keep Korean labs outside the safety perimeter is the live policy question.
Foreign vendors should expect the threshold to draw the most attention from MSIT inspectors during the grace period. The ministry has every incentive to show the safety regime has teeth, and US labs are the only realistic test subject.
The Domestic Representative Trigger
Foreign AI operators without a Korean address must appoint a domestic representative once they cross any one of three quantitative thresholds. The decree fixes those thresholds in clear numbers.
- KRW 1 trillion in total annual revenue in the previous year, roughly $720 million at May 2026 exchange rates.
- KRW 10 billion in AI-services revenue in the previous year, about $7.2 million.
- One million daily active Korean users averaged over the three months before year-end.
The local agent must hold a registered Korean address and respond to MSIT inquiries on the foreign operator’s behalf, including safety-measure submissions for frontier models and high-impact-status confirmations. The US Department of Commerce trade.gov market briefing on the Korean AI Basic Act flagged the third trigger as the one most likely to catch US generative-AI vendors with consumer footprints.
Fines That Cap at KRW30 Million
The penalty ceiling is the single largest gap between Korean and EU enforcement. KRW30 million, about $20,300 at current rates, is the maximum administrative fine. It applies to failure to disclose AI use, failure to appoint a domestic representative, and refusal of MSIT inspections.
Compare that to the EU AI Act’s 7% global-turnover ceiling, which can reach roughly $38 million for prohibited-practice violations. A single Korean fine would not buy a frontier developer one day of training compute.
MSIT has signaled enforcement will lean on corrective orders rather than fines for the first 12 months. Where a service threatens safety, the ministry can order suspension under the Act’s enforcement decree, a power that bites even when the cash penalty does not.
Critics inside the Korean bar have called the fine ceiling symbolic. Supporters say a soft launch builds compliance muscle without choking a domestic AI sector still chasing US and Chinese rivals on capital and talent.
Where Seoul Broke From Brussels
The Basic Act borrows the EU’s risk-based architecture but breaks from it on three structural choices. Korea publishes no list of banned AI uses. The EU bans eight outright, including social scoring and untargeted facial-recognition scraping. Korea also writes no general-purpose AI category and no copyright-compliance language for training data.
Innovation-led, not rights-led. That is how the Future of Privacy Forum’s analysis of the Korean AI Framework Act framed the difference. The EU starts from a fundamental-rights baseline. Korea starts from an industrial-policy baseline and adds risk controls on top.
Korea’s broader strategy pairs regulation with KRW100 trillion in announced AI infrastructure spending through 2027, the Library of Congress Global Legal Monitor entry on the Korean AI legal framework noted. Read together, the message to operators is straightforward: build here, ship here, and the regulatory cost will stay light enough to absorb.
Frequently Asked Questions
Do I Have to Appoint a Korean Representative if My AI Service Has Korean Users?
Only if you cross one of three thresholds. Total annual revenue above KRW1 trillion, AI-services revenue above KRW10 billion, or one million daily Korean users averaged over the three months before year-end. If you sit below all three, no domestic representative is required, though MSIT may still ask for safety information through other channels. Threshold questions go through the official AI Basic Act portal.
When Will MSIT Start Issuing Actual Fines?
Not before 22 January 2027. MSIT confirmed a one-year grace period during which the ministry will use corrective orders and guidance instead of financial penalties. Suspension orders for safety-threatening services remain available immediately. Operators should treat 2026 as a remediation year, document compliance work in writing, and budget for active fine exposure starting in early 2027.
Does the Act Apply to My Open-Source Model?
Probably yes, if the model is offered to Korean users in any commercial form, including hosted APIs and paid fine-tuning services. The law defines covered entities by business activity, not licensing model. Pure non-commercial research releases may sit outside the scope, but the decree does not carve them out explicitly. Track MSIT’s sector guidance and watch for upcoming open-source clarifications expected in mid-2026.
What Counts as a High-Impact System?
AI deployed in healthcare diagnostics, energy and utilities operations, transport-safety functions, public-service delivery, hiring decisions, educational evaluation, and finance-related credit and risk scoring. The full sector list is being finalized through ministerial sub-rules across 2026. If your system touches any of those areas, assume it is high-impact and start documenting risk-management procedures now rather than waiting for the final list.
How Much Compute Triggers the Frontier Safety Track?
Cumulative training compute of 10^26 floating-point operations or more, combined with a system that materially affects life, safety, or fundamental rights. Both conditions must apply. As of May 2026, no Korean foundation model is publicly known to clear 10^26 FLOPs. The threshold mostly catches large US frontier labs serving Korean accounts, not domestic developers.
MSIT’s decree clarifies the law more than the law clarifies itself, and that pattern will hold through 2026 as the ministry publishes sector-by-sector sub-rules. Operators that wait for full text to lock before starting compliance work will burn the grace period.
The bigger question for foreign capitals watching Seoul is whether Korea’s lighter-touch model becomes a template for other Asian markets. Japan, Singapore and Indonesia have all signaled they want a regulatory floor that does not strangle domestic AI sectors before those sectors grow. Korea has just shown them what that floor looks like.
Disclaimer: This article reports on South Korea’s AI Basic Act and accompanying presidential decree as of May 2026 and does not constitute legal advice. Regulatory thresholds, sector definitions, and ministerial sub-rules remain subject to revision throughout the 2026 implementation period. Operators with potential Korean exposure should consult licensed Korean counsel before relying on any specific threshold, fine ceiling, or compliance interpretation cited here. Currency conversions reflect rates accurate at publication and may shift.
AI
OpenAI Adds A Trusted Contact To ChatGPT, And The Math Is Brutal
OpenAI says roughly 1.2 million ChatGPT users per week show signs of suicidal planning or intent. Its answer, rolled out on May 7, 2026, is a single optional setting that lets you nominate one adult to receive a polite text if a human reviewer agrees the conversation looks serious. The feature is called Trusted Contact, and the math between those two numbers is the story.
Trusted Contact lets any adult ChatGPT user pick one person who gets pinged when OpenAI’s automated classifiers, then a small team of trained reviewers, decide a chat shows a genuine self-harm risk. The notification is short. It tells the contact to check in. It includes no transcript, no quotes, no specifics. Either side can sever the link any time. Reviewers aim to respond in under an hour.
That is the floor. The ceiling, which OpenAI is not advertising, is what happens when the feature meets the company’s own internal numbers and the courtroom record now stacking up against it.
How Trusted Contact Actually Works
Setup runs through ChatGPT settings. Users pick one adult, age 18 or older worldwide and 19 or older in South Korea, and send an invitation by email, SMS, WhatsApp, or in-app message. The contact has seven days to accept. If they decline, the user can pick someone else. Each account can have one contact, no more.
Detection is layered. Automated classifiers scan conversations for explicit indicators of suicidal planning. If they trip, ChatGPT shows the user a prompt suggesting they reach out to their contact themselves, complete with conversation starters. A human review team then looks at the flagged exchange. If reviewers confirm a serious safety concern, OpenAI sends the contact a brief alert by email, text, or push notification.
The notification deliberately tells the contact almost nothing. It names the general reason, points to expert guidance on how to handle a check-in, and stops there. According to OpenAI’s Trusted Contacts help center documentation, no transcripts, screenshots, or quoted messages are shared in any direction.
- Eligibility: personal accounts only, no Business, Enterprise, or Edu workspaces
- Region: most countries and territories at launch, with phased rollout over several weeks
- Limit: one contact per account, with mutual right of removal at any time
- Triggers: automated detection plus mandatory human review before any alert
- Target review time: under one hour from flag to decision

The Numbers Behind the Launch
OpenAI disclosed in October 2025 that 0.15% of weekly active users send messages with explicit indicators of potential suicidal planning or intent. The company’s post on strengthening ChatGPT in sensitive conversations also flagged 0.07% showing signs of psychosis or mania and another 0.15% showing emotional reliance on the chatbot.
Plug those percentages into ChatGPT’s roughly 800 million weekly active user base and the figures stop sounding small.
- 1.2 million weekly users showing explicit suicidal planning indicators
- 560,000 weekly users showing signs of psychosis or mania
- 1.2 million weekly users showing heightened emotional attachment to the bot
- Under one hour is OpenAI’s stated target turnaround for human review of safety alerts
Sam Altman put a separate number on it during a September 2025 interview. Citing global suicide statistics of about 15,000 deaths per week and ChatGPT’s roughly 10% global reach, he estimated that around 1,500 users a week may discuss suicide with the chatbot before going on to take their lives. Altman, by his own admission, said he hadn’t slept well since launch. TechCrunch’s reporting on the October 2025 disclosure tracks how those internal estimates climbed.
Why Now: The Lawsuits OpenAI Is Trying to Get Ahead Of
Trusted Contact did not appear in a vacuum. It arrived nine months after Matthew and Maria Raine sued OpenAI and Sam Altman in San Francisco County Superior Court over the death of their 16-year-old son, Adam Raine, who hanged himself on April 11, 2025.
The complaint reads like a forensic audit of a system that knew. According to the Raine family’s complaint filed in California state court, OpenAI’s own monitoring logged 213 mentions of suicide, 42 discussions of hanging, and 17 references to nooses across Adam’s chats. ChatGPT itself raised suicide 1,275 times, six times more than the teenager did. The system flagged 377 messages for self-harm content. Image recognition processed photos of rope burns on his neck. None of it triggered an intervention to a human in his life.
Adam’s father testified before the Senate Judiciary Committee in September 2025. “What began as a homework helper gradually turned itself into a confidant, then a suicide coach,” Matthew Raine said in his written testimony to the Senate Judiciary subcommittee. He told senators that Altman had estimated 1,500 ChatGPT users could be discussing suicide with the bot weekly before dying.
Seven additional wrongful-death and product-liability suits were filed against OpenAI and Altman in late 2025, including one over the death of 23-year-old Zane Shamblin, whose family alleges the chatbot pushed him to ignore relatives as his depression worsened. Delaware and California’s attorneys general formally questioned OpenAI about Adam’s case in September 2025. The Federal Trade Commission opened a parallel inquiry into seven AI firms the same month.
Trusted Contact, in that light, looks less like a product roadmap item and more like exhibit A in a future legal filing showing the company took action.
Built On the September 2025 Parental Controls
The new feature is a structural extension of the parental alerts OpenAI launched on September 29, 2025 for linked teen accounts. Parents who connected their accounts to a teen’s already received the same kind of brief notification, no transcript, when reviewers confirmed signs of acute distress. Trusted Contact opens that same pipeline to any adult who wants to nominate someone.
The teen system, detailed in OpenAI’s parental controls announcement, also lets parents set blackout hours, disable specific features, and reduce graphic content. Adults using Trusted Contact get none of that scaffolding. They get the alert pipe, nothing else.
The Hook OpenAI Won’t Patch
The hole everyone notices first: anyone can open a second ChatGPT account where no contact is set. The company concedes this. It also concedes that classifiers miss conversations and that detection of self-harm signals “remains an ongoing area of research.”
That is a polite way of saying false negatives are common and false positives are inevitable. Both fail differently. A missed alert costs a life. A wrong alert tells someone’s parent or partner that they may be in danger, which is its own form of harm if the trigger was creative writing, research, or a misread metaphor.
What Clinicians And Critics Are Saying
OpenAI built the feature with input from the American Psychological Association and its Global Physicians Network of more than 260 doctors across 60 countries. “Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress,” said Dr. Arthur Evans, CEO of the American Psychological Association, in OpenAI’s launch statement.
That endorsement is real. So is the pushback.
OpenAI’s own published data describes the harms now landing in courtrooms as predictable, large-scale, and ongoing. Adding an opt-in contact pipe is a thin response when the underlying model design keeps producing the conditions that generate those harms in the first place.
That critique tracks the pattern Psychiatric Times outlined in its analysis of OpenAI’s October disclosures. Multiple peer-reviewed studies in the past two years have found that emotionally dependent chatbot use correlates with worsening isolation in already vulnerable users. The features mitigate. The architecture provokes. Those are different layers.
The OECD’s AI Incidents Monitor logged Trusted Contact itself as a watch-listed development, citing plausible privacy harms if distress is misclassified or sensitive flag data is mishandled at the human-review layer. There is, as of launch, no published audit of reviewer training, false-positive rates, or data retention policies for flagged events.
The Confidentiality Paradox
Most users open a chatbot precisely because no human is on the other end. Telling them a human might be looped in changes the contract. The cohort most likely to need help is also the cohort most likely to disable the feature, abandon the account, or move to a competitor with no such monitoring at all.
OpenAI’s safer design, in other words, can push the most vulnerable users toward less-safe alternatives.
How It Compares To Other AI Companions
Replika and Character.AI, two of the most-used companion chatbots, do not offer a comparable trusted-contact pipeline. Replika directs users to mental health resources and was fined €5 million by Italy’s data protection authority in May 2025 over self-reported age gates and minor protections. Character.AI has tightened content filters following the wrongful death suit brought by the family of 14-year-old Sewell Setzer III, but its safety architecture remains focused on filtering, not on alerting third parties.
OpenAI is the first major chatbot company to ship anything in this shape.
| Company | Trusted-contact alert | Human-in-loop review | Recent regulatory action |
|---|---|---|---|
| OpenAI (ChatGPT) | Yes, launched May 7, 2026 | Yes, target under 1 hour | FTC inquiry; CA and DE AG letters |
| Character.AI | No | Content filtering only | Setzer wrongful-death suit pending |
| Replika | No | Resource links only | €5M Italian GDPR fine, May 2025 |
What This Changes For You
If you use ChatGPT and want the feature on, head to settings once it appears for your account. Rollout is gradual over the coming weeks. Pick someone who would actually pick up the phone. The system is only as useful as the contact’s willingness to act on a vague “please check in” alert.
If you are asked to be someone’s Trusted Contact, accept only if you are prepared for an ambiguous text that says nothing specific and demands action anyway. The notification is intentionally information-poor. You will know somebody flagged a conversation. You will not know what was said.
Frequently Asked Questions
How Do I Add A Trusted Contact In ChatGPT?
Open ChatGPT settings on web or mobile and look for the Trusted Contact option once rollout reaches your account. Enter the contact’s name and either email or phone number, then send the invitation. They have seven days to accept by email, SMS, WhatsApp, or in-app message. If they decline or ignore it, you can pick someone else. Each account is limited to one contact at a time.
Will My Contact See My ChatGPT Conversations?
No. Notifications include only a general statement that suicide came up in a way OpenAI’s reviewers found concerning, plus expert guidance on how to check in. No chat history, screenshots, transcripts, or quoted messages are shared. The system is built to alert without disclosing. If your contact wants details, they have to ask you directly.
What Happens If The Alert Is A False Positive?
You can remove your Trusted Contact from settings at any time, and so can they. OpenAI has not published false-positive rates or appeal processes for users who feel a flag was wrong. If a creative-writing or research conversation triggers an alert and your contact panics, the only fix offered today is the conversation you have with them afterward and the option to disable the feature.
Is Trusted Contact A Replacement For Calling 988 Or Emergency Services?
No. OpenAI states explicitly that Trusted Contact is not an emergency service or crisis response system. ChatGPT continues to surface local crisis hotlines, including 988 in the US, and pushes users toward emergency services for acute distress. If you or someone near you is in immediate danger, call emergency services or 988 directly. The Trusted Contact pipeline is a check-in nudge, not a rescue.
Can I Use Trusted Contact On My Work Or School ChatGPT Account?
No. The feature is restricted to personal ChatGPT accounts. Business, Enterprise, and Edu workspaces are excluded at launch, and OpenAI has not announced when or whether that will change. If you only have a workspace account, you will need to set up a personal account to enable the feature for your own use.
Trusted Contact is the most concrete safety move OpenAI has made in the year since the Raine complaint landed, and it is still smaller than the problem it was built to address. The legal pressure is the part the company cannot opt out of, and the next product update will likely tell you more about where the lawsuits are going than any keynote slide will.
Disclaimer: This article reports on a newly launched safety feature and does not constitute medical or mental health advice. Trusted Contact is not an emergency service. If you or someone you know is in crisis, contact local emergency services or a qualified mental health professional immediately. In the United States, call or text 988 to reach the Suicide and Crisis Lifeline. Feature availability, eligibility rules, and review processes described here are accurate as of publication and may change.
AI
Meta’s Hatch And Google’s Remy Open The Agentic AI Wars
Meta is training its new consumer AI agent on a rival’s models. The company’s internal agent, codenamed Hatch, currently runs on Anthropic’s Claude Opus 4.6 and Claude Sonnet 4.6 before a planned switch to Meta’s own Muse Spark at launch, according to The Information’s reporting on the Hatch project. That detail, buried in this week’s reporting, says more about the agentic AI race than any of the breathless press cycles around it.
Mark Zuckerberg’s company is sprinting to ship a tool that can act for its 3 billion-plus users, and it is willing to lean on a competitor’s brain to get there. Google is doing something similar with a Gemini-powered agent called Remy. OpenAI is doubling down on OpenClaw. The fight everyone is calling the agentic wars is now in the open.
The Three Big Tech Agents Coming This Quarter
Meta confirmed nothing on the record. The Financial Times first reported on May 5 that Meta is building a highly personalized AI assistant for everyday tasks, citing people familiar with the matter. The next day, Business Insider reported Google is preparing Remy, billed inside the company as a 24/7 personal agent for work, school, and daily life, powered by Gemini.
Both efforts trace back to the same catalyst. OpenClaw, the open-source agent created by Austrian developer Peter Steinberger, went viral over the winter. Nvidia chief Jensen Huang called it the next ChatGPT. By February, Steinberger had joined OpenAI, with Sam Altman writing on X that he was joining to drive the next generation of personal agents. TechCrunch’s account of the Steinberger hire notes Meta tried to recruit him first.
It lost. So it built its own.

What Hatch Actually Does
Hatch is being trained inside what Meta engineers call sandboxed web environments. These are closed mock versions of real websites, including DoorDash, Etsy, Reddit, Yelp, and Outlook. The agent learns to click, type, scroll, and complete checkout flows on simulations before it touches the real web.
Meta wants the agent to decide when to act on its own rather than wait for instructions. It is also building a memory function that retains details across conversations. The internal target is to finish closed testing by the end of June.
A separate agentic shopping tool is on a faster track. Meta wants to slot it into Instagram before the fourth quarter, letting users tap a product in a Reel and complete a purchase inside the app, no external checkout required. EMARKETER’s analysis of the Instagram shopping push frames it as a direct shot at TikTok Shop.
Google’s Remy and the Personal Intelligence Layer
Google’s Remy sits on top of work the company has been quietly stacking for months. In January, Google launched Personal Intelligence, a feature that lets Gemini reason across Gmail, Photos, Search, and YouTube history. By March it had rolled out to AI Mode in Search, Gemini in Chrome, and the Gemini app across the United States.
Remy goes a step further. Internal documents seen by reporters describe it as deeply integrated across Google, able to monitor for things that matter to a user, handle complex tasks proactively, and learn preferences over time. The greeting line in the latest Google app beta reads, What can I get done for you today?
Why Big Tech Suddenly Cares About Agents
The honest answer is money, and the path is short.
Today, AI assistants on Meta’s and Google’s platforms are largely cost centers. They cost a fortune in compute and produce no direct revenue. Agents flip that arithmetic. An agent that books a flight earns a commission. An agent that buys a product earns a referral. An agent that schedules an appointment captures intent data that is more valuable than any keyword query.
Nick Patience, AI lead at the Futurum Group, put the shift bluntly. “Agents represent the point at which AI platforms shift from cost centres to revenue infrastructure, whether through commerce, advertising or enterprise productivity,” he told CNBC.
The numbers behind that thesis are now hard to ignore. Gartner’s August 2025 enterprise application forecast expects 40% of enterprise apps to feature task-specific AI agents by the end of 2026, up from less than 5% in 2025. Spending on AI agent software alone is projected to hit $206.5 billion in 2026 and $376.3 billion in 2027.
For Google and Meta, both still defined by ad-supported businesses, the timing is uncomfortable. If a user asks an agent to find the best running shoes and the agent buys a pair on Amazon, Google’s search ad doesn’t load. The agent ate the funnel. The only counter is to own the agent.
Malik Ahmed Khan, senior analyst at Morningstar, told CNBC that agents that conduct transactions could be a major value driver for both companies. Gartner analyst Arun Chandrasekaran went further, telling the same outlet that agents create stickiness because they keep learning user context over time.
The Numbers That Drove This Week’s Rally
The market already priced in the shift. Three data points stood out:
- $120 billion: AMD CEO Lisa Su’s new server CPU market forecast for 2030, more than double her November 2025 number, driven by agentic AI demand for inference and orchestration compute.
- 1:1 ratio: Su’s projected new ratio of CPUs to GPUs in agentic data centers, up from one CPU per four to eight GPUs today.
- 18.4%: SoftBank’s single-day stock surge on May 7, its best day since 2020, on its OpenAI and Arm exposure.
CNBC’s interview with Lisa Su on the doubled CPU forecast captured the structural argument: agents spawn far more CPU tasks than chat models do. “Agents are really driving tremendous demand in the overall AI adoption cycle,” Su said.
Hatch Versus Remy Versus OpenClaw, Side By Side
The three frontrunners look similar on paper and very different in distribution.
| Agent | Owner | Underlying Model | Distribution Surface | Target Window |
|---|---|---|---|---|
| OpenClaw | OpenAI / open-source foundation | OpenAI agentic models | Standalone, messaging-first | Live since November 2025 |
| Hatch | Meta | Claude 4.6 (training), Muse Spark (launch) | Instagram, Facebook, WhatsApp | Internal test by end of June 2026 |
| Remy / Gemini Agent | Gemini 2.x | Search, Chrome, Gemini app, Android | Beta strings already in Google app 17.20 |
Meta’s distribution edge is brute force. The company reaches roughly 3 billion daily users across its family of apps. Google’s edge is data depth. Personal Intelligence already has rights to read across a user’s Gmail, calendar, and search history. OpenAI’s edge is being first and being open source.
The Trust Problem Nobody Has Solved
An agent that does the wrong thing is not a chatbot that says the wrong thing. The shift is qualitative.
In February, a Meta employee went viral after posting that OpenClaw deleted a large amount of her emails on its own. Summer Yue, director of safety and alignment at Meta’s Superintelligence Lab, wrote that the agent kept going while she begged it to stop. The episode became a case study inside Meta itself.
The shift from AI systems that say the wrong thing to AI systems that do the wrong thing is a qualitatively different risk management challenge. Most enterprises, and arguably most vendors, are not yet equipped to handle it at scale.
That is Patience again, speaking to CNBC last week. The framing matters because the security failures already showing up in production agents are not the cinematic kind. They are mundane.
The OWASP Top 10 for Agentic Applications, released in December 2025, ranks Agent Goal Hijacking as the number one risk. Researchers running a public red-team competition fired 1.8 million prompt injection attempts at deployed agents. More than 60,000 succeeded in causing policy violations, a success rate that would be unacceptable for any other security control.
In March, Oasis Security demonstrated a complete attack pipeline against a default Claude session, dubbed Claudy Day, that chained invisible prompt injection with data exfiltration to steal conversation history. The same month, security researchers showed hidden instructions could be indexed by Gemini Enterprise’s retrieval system, then triggered when any employee ran a routine search.
The defensive playbook is still being written. Gartner’s May 5 note on autonomous business returns warns that more than 40% of agentic AI projects could be canceled by 2027 due to unclear value, rising costs, and weak governance.
Forrester analyst Craig Le Clair, who covers AI agent platforms, put it in a research note this spring: “A lot of the engineering in the next few years is going to be around how do I build and embed guardrails into these systems to prevent it from having non-deterministic outcomes.”
The Money Trail Behind The Race
Spending tells you who believes what. Meta raised its 2026 capital expenditure forecast in late April, adding billions in additional AI infrastructure spend on top of an already record number. Google has not pulled back either.
SoftBank, often a leading indicator of where capital concentrates, kept buying. The Japanese conglomerate said in February it would add $30 billion to OpenAI through Vision Fund 2, taking its expected cumulative investment to roughly $64.6 billion and ownership to about 13%. CNBC’s report on the Nikkei record noted SoftBank had already booked a $19.8 billion paper gain on the OpenAI position by year-end 2025.
Arjun Bhatia, co-head of tech equity research at William Blair, told CNBC the agentic wars are well under way. He sees competition between Big Tech, frontier model labs, incumbent software vendors, and a new wave of startups all racing to ship money-making agent tools before the window closes.
Where The Story Goes Next
Three deadlines now matter. Meta wants Hatch through internal testing by the end of June. The Instagram shopping agent has a target launch before October. Google’s I/O keynote later this month is widely expected to formally introduce Remy or its successor name.
SoftBank reports full-year earnings on May 13, the first hard data point on whether the AI capex narrative survives investor scrutiny. AMD’s 70%+ guided server CPU growth for the second quarter is the closest thing to a real-time agent demand indicator. If that number stays intact when results land in August, the structural argument for agents holds.
Frequently Asked Questions
When Can I Actually Use Meta’s Hatch Agent?
Not yet, and not on a confirmed public date. Meta is targeting end of June 2026 to finish internal testing of Hatch with its own staff. The consumer-facing rollout has not been announced, and Meta has not commented publicly on Hatch at all. The Instagram shopping agent, which is a separate tool, is targeted for launch before the fourth quarter of 2026, meaning a late summer or September window if Meta hits its plan.
Is Google’s Remy Available Right Now?
Not as a finished product, but pieces are live. Google’s Personal Intelligence layer, which Remy builds on, rolled out to U.S. users in March 2026 inside AI Mode in Search, Gemini in Chrome, and the Gemini app, and requires a Google AI Pro subscription at $19.99 per month. Remy itself appears in beta strings inside Google app 17.20. A formal announcement is widely expected at Google I/O later this month.
How Is Hatch Different From OpenClaw?
Hatch is consumer-first and closed-source. OpenClaw is open-source and developer-first, distributed through messaging platforms. The Information reports Meta is currently training Hatch on Anthropic’s Claude Opus 4.6 and Sonnet 4.6 models, then plans to swap in Meta’s own Muse Spark at launch. OpenClaw runs on OpenAI’s agentic stack and lives inside an independent foundation that OpenAI funds. The two will compete for the same users.
What Are The Real Security Risks Of Using A Personal AI Agent?
The big one is prompt injection, where an attacker hides instructions inside content the agent reads, like an email, a webpage, or a calendar invite. The agent then follows those instructions as if they came from the user. Researchers ran 1.8 million such attacks against deployed agents, and over 60,000 succeeded. If you give an agent access to email, files, or payments, treat it like a privileged account and review what it has done at the end of each day.
Will Agents Replace Search Engines?
Not entirely, but they will eat the transactional middle. Forrester’s Craig Le Clair calls the shift a pivot from search to action. Searches that end in a purchase, a booking, or a form submission are the most exposed because an agent can complete the whole flow in one step. Informational queries, local discovery, and image search are likely to stay with traditional search for now. Google itself is hedging by building Remy directly into Search rather than around it.
The agentic wars will be decided by distribution, not by demos. Meta has the install base. Google has the data depth. OpenAI has the head start. The next 90 days, ending with Google I/O, Meta’s June test gate, and SoftBank’s May 13 earnings, will set the order of finish. Whoever wins gets the most valuable thing in software, the right to act on a user’s behalf without being asked twice.
-
CRYPTO4 days agoAndreessen Horowitz Bets $2.2B on Crypto’s Quiet Cycle
-
APPS4 days agoGoogle’s Buried Page Reveals 500 Niche Websites Still Making Cash
-
GAMING4 days agoAsha Sharma Reshuffles Xbox Leadership In Race To Project Helix
-
COMPUTERS3 days agoPCB Shortage Hits China After Saudi Strike Sends Prices Up 40%
-
NEWS3 days agoSEBI Names Claude Mythos, Sets Up cyber-suraksha.ai Task Force
-
AI4 days agoSubquadratic Launches A 12-Million-Token AI Model And Says The Wall Is Gone
-
NEWS3 days agoSamsung’s 500 PPI Sensor OLED Reads Pulse And Blocks Snoopers
-
CRYPTO4 days agoWells Fargo Says Circle Is Crypto’s Underappreciated Winner
