May 2025 Tech Newsletter
There has been a lot of movement in Washington to push forward new rules on AI and online harms. U.S. courts are also moving closer to punishing Big Tech monopolies. And overseas, European regulators are cracking down on data privacy violations and on digital platforms, while even tech giants are voluntarily offering concessions to avoid penalties.
I’ll be recapping the most important developments in technology law and policy from May 1 to May 31, 2025, in chronological order, each followed by why it matters.
1. EU Fines TikTok €530 Million Over Data Transfers to China (May 2nd)
What happened: European privacy regulators hit TikTok with one of the largest fines ever issued under EU privacy law for mishandling user data. On May 2, Ireland’s Data Protection Commission (DPC), TikTok’s lead EU regulator, announced a €530 million (about $600 million) fine against TikTok. This action comes after a probe found TikTok failed to ensure that European users’ personal data was adequately protected from access by TikTok staff in China. The DPC also ordered TikTok to suspend data transfers to China within six months unless it brings its operations into compliance with EU data protection law. TikTok has stated it will appeal the ruling, asserting that it has been following EU rules (such as using standard contractual clauses for data transfers) and that it has never provided European data to the Chinese government. The company also pointed to its ongoing “Project Clover” initiative to store European user data in local data centers, arguing the decision doesn’t fully account for those measures. This fine comes on top of a €345 million fine TikTok received in 2023 over mishandling children’s data.
Why does this matter? (WDTM?): The DPC is bringing increasing scrutiny to bear on TikTok; coming only two years after the €345 million children’s-data fine, this decision underscores Europe’s serious stance on data privacy and data protection. The order to halt data flows to China is particularly significant: it reflects mounting Western concern about Chinese national security laws that could compel companies like ByteDance to hand over data to Beijing. If TikTok cannot satisfy EU regulators that Europeans’ data is beyond Chinese government reach, it may face a de facto ban on data transfers, which would be a major operational and technical challenge. The hefty €530 million fine also serves as a warning to other tech firms (especially those with links to jurisdictions seen as high-risk) that violations of the EU’s General Data Protection Regulation (GDPR) can be costly and carry business-altering consequences.
Globally, this move could deepen regulatory fragmentation: as the EU pushes data localization and stringent privacy compliance, other jurisdictions might follow suit, forcing multinational tech companies to adopt European-style data practices everywhere. Finally, the TikTok penalty may further fuel geopolitical tech tensions; coming at a time when the U.S. and its allies are also evaluating or restricting TikTok on security grounds, it adds pressure on TikTok to increase transparency and oversight of its data practices to avoid a broader backlash.
2. California Privacy Agency Issues First Fine Under CPRA (May 6th)
What happened: In a milestone for U.S. data privacy enforcement, California’s new Privacy Protection Agency (CPPA) penalized Todd Snyder, a national clothing retailer, for violating the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA). The CPPA’s investigation found the retailer failed to honor consumers’ opt-out requests, collected more personal data than necessary, and imposed excessive ID verification hurdles. Regulators ordered the company to overhaul its privacy practices and levied a fine of about $345,000. This is one of the first major actions by the CPPA, which only recently gained enforcement powers. The fine signals that companies doing business in California must strictly comply with data privacy requirements or face sanctions.
WDTM?: This decision marks a new phase in U.S. privacy enforcement, even in the absence of a broad federal privacy law. California’s aggressive stance sets a de facto national standard, given the state’s economic reach. Legally, it demonstrates that new state privacy laws have real teeth: businesses can be held accountable for how they handle consumer data. This enforcement likely foreshadows greater scrutiny of tech and retail firms’ data practices, influencing corporate compliance programs. It is a definitive win for privacy rights, showing that the promises of laws like the CCPA/CPRA are being actively upheld to protect personal information.
3. Global Crackdown Dismantles DDoS-for-Hire Networks (May 7th)
What happened: An international law enforcement operation in early May took down several major “booter” or DDoS-for-hire services that had been flooding the internet with attacks. On May 7, the U.S. Department of Justice announced the seizure of 9 domains associated with some of the world’s most popular DDoS-for-hire platforms. In parallel, police in Poland (with Europol’s support) arrested four individuals accused of administering these sites. The targeted services had allowed paying customers to launch overwhelming distributed denial-of-service attacks against websites and online platforms under the guise of “stress testing.” Officials said these platforms were responsible for hundreds of thousands of attacks worldwide, disrupting schools, government agencies, gaming services, and millions of users. This coordinated sweep, part of the long-running “Operation PowerOFF,” builds on earlier efforts to shutter illegal DDoS marketplaces.
WDTM?: The takedown strikes at the heart of cybercrime-as-a-service. For years, even malicious actors with minimal technical skills could rent time on these booter services to knock targets offline, causing chaos for businesses and individuals alike. No single country could have shut these services down as efficiently and swiftly as this international enforcement operation did. Since the days of my senior thesis (Spring 2023), I have seen markedly better-coordinated responses to global cybercrime, and this headline is further evidence of that.
4. Two-Decade-Old Botnet Taken Down by U.S. and Allies (May 9th)
What happened: U.S. prosecutors unsealed indictments against four foreign hackers accused of operating a massive botnet for illicit profit since the mid-2000s. The network, known by names like “Anyproxy” and “5socks,” had infected thousands of unwitting individuals’ home and office routers worldwide with malware, turning them into proxy servers for rent. For roughly two decades, the defendants allegedly sold access to these compromised devices (advertising more than 7,000 proxies at one point), allowing paying clients to route their traffic through victims’ routers to mask criminal activities. U.S. and international authorities coordinated to seize the botnet’s domain names and infrastructure, effectively disabling the network in a single sweep. The accused administrators, hailing from Russia and Kazakhstan, are charged with conspiracy, computer fraud, and identity theft, among other offenses.
WDTM?: This bust dismantled a long-running, profitable underground service that had quietly undermined cybersecurity for years. By covertly conscripting everyday people’s routers, the botnet not only violated users’ privacy and security but also enabled a marketplace for further crimes (from spam campaigns to more sophisticated intrusions). Its takedown demonstrates law enforcement’s increasing ability to penetrate and prosecute complex cybercriminal enterprises, even those operating across borders and over many years. It is also a powerful reminder for the public and companies to update and secure their network devices: the longevity of this scheme was partly due to many routers remaining unpatched and vulnerable. In short, a major vector for cyber abuse has been eliminated, likely making the internet incrementally safer (and telegraphing to other botnet operators that they are not beyond the law’s reach).
5. Senate AI Working Group Unveils Bipartisan Regulation Roadmap (May 15th)
What happened: On May 15, leaders of the U.S. Senate’s Bipartisan AI Working Group—including Majority Leader Chuck Schumer and Senators Mike Rounds, Martin Heinrich, and Todd Young—released a long-awaited “Driving U.S. Innovation in Artificial Intelligence” policy roadmap. This agenda-setting document distills months of expert forums into a framework for potential AI legislation. It highlights eight key issue areas Congress should address: supporting U.S. AI innovation, managing AI’s impact on the workforce, high-impact AI use-cases, protecting elections and democracy, privacy and liability, ensuring transparency and copyright protection, guarding against AI risks, and safeguarding national security. The roadmap also calls for substantial federal investment (on the order of $32 billion annually) in AI research and education, new AI “grand challenges,” and a national data privacy law to accompany AI development. While not a bill itself, the blueprint reflects areas of bipartisan consensus and is expected to guide multiple committees as they draft AI rules.
WDTM?: This marks the most concrete step yet by Congress toward comprehensive AI legislation. The fact that a bipartisan group of senior senators produced a unified framework signals growing political will to establish guardrails for AI. The roadmap’s breadth—from civil liberties (deepfakes, transparency, copyright) to national security—shows lawmakers are grappling with AI’s wide-ranging societal implications. It also emphasizes “responsible enablement” of AI innovation alongside risk mitigation, suggesting that any forthcoming regulations will try to foster AI advancements and protect the public. For a tech-savvy public, the roadmap offers an early look at the rules that could shape AI use in the U.S., such as transparency requirements for AI systems or liability for harmful outcomes. While it remains to be seen how quickly Congress can translate these ideas into law, this initiative lays the groundwork for the first-ever U.S. AI regulatory regime.
6. Microsoft Offers to Unbundle Teams to Settle EU Antitrust Probe (May 16th)
What happened: Microsoft moved to defuse a long-running EU antitrust investigation by proposing a significant change to its Office software bundling. On May 16, the European Commission revealed that Microsoft had offered to sell its popular Office 365 suite without the Teams collaboration app at a lower price, as an option for customers. This concession comes in response to a 2020 complaint by Slack (now part of Salesforce) that Microsoft’s tying of Teams with Office unfairly stifled competition in the workplace chat market. Under Microsoft’s proposal, European business customers would be able to purchase Office or Microsoft 365 subscriptions without Teams included, for a price up to €8 cheaper than the bundle with Teams. Additionally, Microsoft offered measures to boost interoperability for rivals: it pledged to let third-party communications apps integrate with Office and to improve data portability (for example, allowing users to easily export their Teams chat history to a competing service). The EU’s competition regulators have welcomed the proposal and are seeking feedback from other industry players over the next month before deciding whether to accept the deal. If regulators and rivals find Microsoft’s commitments satisfactory, it could conclude the investigation without a fine and avert formal charges. Notably, Microsoft indicated it would implement the unbundling and pricing changes globally, not just in Europe, if the offer is accepted.
WDTM?: Microsoft’s offer is a prime example of Big Tech adapting its business practices in the face of regulatory scrutiny- in this case, Europe’s tougher enforcement of competition rules. By potentially unbundling Teams, Microsoft is addressing concerns that it leveraged its dominance in productivity software to gain an unfair edge in the videoconferencing and chat market (especially salient as remote work tools boomed in recent years). For businesses and consumers, this could mean more choice and flexibility: companies could save money by opting out of Teams and perhaps use alternative collaboration tools without paying for unwanted extras. Competitors like Slack or Zoom stand to benefit from a more level playing field, where Microsoft can’t as easily use its Office monopoly to squeeze out rivals. Strategically, Microsoft’s willingness to voluntarily align its product offerings with EU demands is significant. Rather than fight and risk a hefty fine or formal ruling (the company has previously paid over €2 billion in EU antitrust fines), Microsoft appears keen to maintain a cooperative stance, perhaps to avoid reigniting U.S.-EU tensions or further investigations. This case also sends a broader signal: under the EU’s Digital Markets Act and other regulations, tech giants are under pressure to avoid tying products and to ensure interoperability. Microsoft’s proactive proposal might set a precedent for how other firms respond to EU competition probes- by offering remedies early to preempt stricter outcomes. In the big picture, it underscores that robust antitrust oversight can yield concrete changes in how tech products are packaged and sold, potentially fostering more innovation and competition in the software market.
7. Congress Passes the TAKE IT DOWN Act (May 19th)
What happened: In a rare show of bipartisanship, U.S. lawmakers approved the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act, which aims to curb the spread of non-consensual intimate images (including AI-generated explicit deepfakes) on online platforms. The bill passed the House by an overwhelming 409-2 vote after clearing the Senate in February, and was signed into law by President Trump on May 19. The new law empowers the Federal Trade Commission (FTC) to require swift removal (within 48 hours) of intimate images shared without consent and criminalizes their distribution across state lines.
WDTM?: This is the first major federal law addressing AI-driven “deepfake” abuses and online revenge porn. It also responds to growing concerns about the misuse of generative AI to create harmful fake content and fills gaps in existing state laws. Victim advocates hail the act as vital to protect privacy and dignity in the digital age. However, civil liberties groups have raised alarms that the 48-hour removal mandate and broad scope could pressure platforms into over-censoring content, including lawful speech, to avoid liability. By potentially requiring proactive monitoring (even of encrypted communications), the law pits privacy and free expression concerns against the imperative to crack down on egregious abuse. How the FTC enforces these provisions, and how tech companies implement rapid takedown systems, will set important precedents for balancing safety and speech online.
8. Scattered Spider Suspected in UK Retail Cyberattacks (May 21st)
What happened: A wave of ransomware attacks targeting UK retailers (including M&S, Co-op, and possibly Harrods) severely disrupted operations. Online shopping was halted, shelves went unstocked, and sensitive data belonging to both staff and customers was stolen. For the first time, the UK’s National Crime Agency (NCA) confirmed it is investigating Scattered Spider, a loose cybercriminal collective known for targeting major U.S. firms. Unlike typical ransomware gangs based in Russia or North Korea, Scattered Spider is believed to consist of young, English-speaking hackers, many based in the U.S. or UK. They reportedly used social engineering against IT help desks to reset credentials and then deployed DragonForce ransomware on victims’ networks.
WDTM?: The attacks forced major retailers offline and exposed systemic weaknesses in corporate IT defenses. They also raise new legal and enforcement challenges: cross-border, decentralized groups like Scattered Spider blur traditional lines of jurisdiction and attribution. For policymakers, this underscores the urgent need for coordinated cybercrime deterrence, stronger supply-chain resilience, and accountability mechanisms for major firms handling sensitive consumer data. The UK’s retail and digital infrastructure is now squarely in the crosshairs.
9. Germany: Court Allows Meta’s EU AI Training Program (May 23rd)
What happened: On May 23, the Higher Regional Court of Cologne declined to block Meta Platforms from training its AI on European user data. A German consumer group had sought an injunction under EU data protection law (GDPR), arguing Meta’s plan to use public Facebook/Instagram posts as AI training data violated privacy. The court, however, refused the request, effectively green-lighting Meta’s program (with some safeguards like opt-outs). The ruling noted that Meta will notify EU users and give them a chance to opt out. Regulators in other EU states (e.g. Hamburg) have also warned Meta, but as of late May no penalty had been imposed.
WDTM?: I’ve been watching this case because it pits the EU’s flagship privacy regime (GDPR) against Big Tech’s race to train ever-larger AI models. It’s fascinating to think of this as an early test of which principle wins when privacy and AI innovation collide. By allowing Meta to proceed, the court signals that training AI on social-media data is legal under current EU rules, at least in Germany. The outcome may encourage other companies to push ahead with AI development. At the same time, privacy advocates warn that it could weaken Europe’s GDPR protections. The decision also shows how European courts are beginning to navigate AI issues. Globally, it adds to the debate on lawful data use for AI training: if major jurisdictions allow such training, it could shape how AI models are developed internationally.
10. Commission Threatens Apple With Fines for DMA Non-Compliance (May 29th)
What happened: On May 29, EU regulators announced that Apple is on notice to fix its App Store payment rules or face ongoing fines. Last month the Commission fined Apple €500 million for breaching the Digital Markets Act (DMA) by blocking developers from directing users to alternative payment options. The new decision warns that Apple has 60 days to implement changes or incur “periodic penalty payments”. The underlying issue is Apple’s reluctance to fully open its app ecosystem: the Commission found that Apple’s current policy (allowing one external link per app and charging a 27% fee on off-platform purchases) is still too restrictive. Apple has announced it will appeal, but the EU has signaled it is ready to hit Apple with fines of up to 10% of annual turnover if compliance is not swift.
WDTM?: This is a bellwether of the DMA’s teeth. The Commission is showing it will not tolerate half-measures from gatekeeper platforms. For Apple, a U.S. giant, it means new legal pressure in Europe to transform how its App Store operates. More broadly, it underscores that regulators will actively enforce new tech rules: compliance with the DMA (and eventually similar laws worldwide) is mandatory. For consumers and developers, it could lead to more choice and lower costs. For Big Tech, it reaffirms that bloc-wide digital regulations can impose hefty consequences.
11. U.S. Tightens Export Curbs on Chip Tech to China (May 29th)
What happened: In a move escalating U.S.-China tech tensions, the Commerce Department ordered companies to halt exports of certain high-tech goods to China without a license. The new restrictions target critical “choke point” items (including semiconductor design software and specialized chemicals, as well as some machine tools, fuels, and aviation equipment). Officials also revoked existing export licenses for some suppliers. While not an outright ban (as licenses may be granted case-by-case), the policy aims to stymie China’s access to advanced chip-making capabilities.
WDTM?: This is a major policy step in the ongoing tech cold war (or, as some call it, arms race) between the U.S. and China. Legally, it expands export control enforcement, and politically it reflects a U.S. national security strategy of hindering China’s semiconductor development. The restrictions could disrupt global supply chains and hurt U.S. companies (like EDA software firms) that lose Chinese customers.
12. UK’s FCA Launches Live AI Testing Service for Firms (Late May 2025)
What happened: In late May, the UK’s Financial Conduct Authority (FCA) announced a new AI testing sandbox for regulated firms. The pilot will let banks, insurers, and other financial companies run their consumer-facing AI models in a supervised “live test” environment, with regulator support. The goal is to validate AI tools’ reliability before they go fully live, helping firms identify biases or errors. The FCA’s Chief Technology Officer said the initiative strikes a balance between encouraging AI innovation and ensuring consumer protection.
WDTM?: This marks one of the first instances of a financial regulator actively providing an AI “safe space.” It signals that authorities are taking an active role in guiding responsible AI deployment. For the tech industry, it sets a precedent: regulators are moving from passive rule-setting to proactive co-development of AI governance tools. More broadly, the initiative could influence global approaches to AI oversight, showing that collaborative testing can be part of a comprehensive AI governance framework, especially in critical sectors like finance.
13. Courts vs. Google Over Its Monopolies (Throughout May)
What happened: After a Virginia judge found in late April that Google illegally monopolized key pieces of the online-advertising market, May turned into a sprint toward penalties. On May 2 the court set a remedies trial for September 22, 2025, ordering discovery to finish by the end of June; on May 6 the Justice Department demanded a structural fix that would force Google to sell both its AdX exchange and DFP ad-server businesses to restore competition; and on May 13 Google hit back, telling the judge a breakup was “unworkable” and offering five-year conduct limits instead. Meanwhile, in a separate Washington, D.C. case over Google’s search monopoly, closing arguments wrapped up on May 30, with the judge weighing remedies that could range from banning Google’s default-search payments to an unprecedented Chrome browser divestiture, while also asking how any order should account for fast-moving generative-AI rivals. Together, these May moves mean the fight has shifted from whether Google broke antitrust law to how hard the courts will swing the hammer, and they raise the very real prospect that 2025 could bring the first court-ordered break-up of a Big Tech business line in decades.
WDTM?: After decades of talk, structural break-ups are finally on the table for Big Tech. Judge Brinkema’s calendar in the ad-tech case shows the court wants a fix in months, not years. If DOJ prevails, Google could lose the very stack that underpins its online-ad dominance, an outcome unseen since the breakup of AT&T in the 1980s. If the courts impose major remedies (like divesting parts of Google’s ad empire or altering its deals with phone makers), it could fundamentally reshape the digital advertising and search landscape and open the door for competitors. For consumers and businesses, that might mean more choice (for instance, easier use of alternative search engines or ad platforms). More broadly, these cases set the legal precedent that existing antitrust laws can be applied to digital markets, which may embolden further actions against other tech firms in areas like app stores or e-commerce.
14. EU Sues Member States Under the Digital Services Act (Throughout May)
What happened: The European Commission ramped up enforcement of its new Digital Services Act (DSA), which imposes sweeping content moderation and transparency obligations on online platforms. Brussels sued five EU member countries for failing to implement the DSA on schedule. In a statement on May 7, the Commission announced it was referring Czechia, Spain, Cyprus, Poland, and Portugal to the EU’s Court of Justice for not establishing the required national oversight measures. These governments had missed DSA obligations such as appointing an independent Digital Services Coordinator and enacting legal penalties for online platforms that violate the DSA. This move to drag member states into court underscores the EU’s determination that its own laws be uniformly enforced across the bloc.
WDTM?: These developments illustrate the EU’s resolve to aggressively enforce its landmark platform governance law, which seeks to make the internet safer and more accountable. Suing member states over DSA implementation signals to European governments that Brussels won’t tolerate foot-dragging: it’s a strong message that the new rules (like appointing watchdogs and setting fines) are mandatory, not optional, and must be in place quickly. This is critical because uneven enforcement by member states could undermine the DSA’s effectiveness; the Commission’s legal action is a push for consistency and rigor in supervising tech platforms Europe-wide.