Introduction
Platform manipulation refers to the activity of malicious actors
who use social media platforms to deceive users.
It is implicated in a wide range of online activities, from romance scams involving celebrity impersonators
to elder financial abuse in which victims lose their life savings by “investing” with fraudsters.
Much to the chagrin of social media executives,
malicious actors identify and communicate with victims through reputable social media platforms like Facebook, Instagram, and Match.com, as well as non-social media platforms like Amazon and Cash App.
In doing so, these actors exploit the functions and features that make online platforms attractive digital spaces to begin with.
Platform manipulation inflicts irreparable harm on individuals from all walks of life. For starters, the financial toll is tremendous: platform manipulation fuels a booming multibillion-dollar industry in the United States.
In 2022, fraudsters stole over $137 billion from Americans,
and Americans over age sixty lose approximately $28.3 billion to scams each year.
Successful scams that involve “deepfakes,” such as artificial intelligence (AI)-generated nude images of minors, can also inflict long-lasting reputational and psychological harm on victims.
In some instances, victims have attempted to rob banks on behalf of their scammers.
One man in Ohio killed an Uber driver whom he wrongly suspected of involvement in a scam.
At a meta level, platform manipulation carries broad implications for global society: Democratic discourse depends on the very trust that online scammers extract from the public sphere.
Platform manipulators rely on the core fabric of social media platforms—their user interfaces (UI) and user experiences (UX)—to operationalize and scale their exploitation.
These actors use platforms to identify and initiate communication with their targets.
They also leverage platforms to expand their operations, test new tactics, and hone their craft, often flying under the radar of platforms’ content detection systems.
Platform designs take many forms and can serve discrete goals. For example, platforms make design choices about how to display features: hiding the “reply all” feature can reduce accidental mass replies, while hiding the number of digits in passcodes can provide additional security. Though some social media companies have adopted platform designs that mitigate harms like cyberbullying and misinformation,
the industry broadly offers limited features to address scams and other kinds of platform-based deception.
Meanwhile, it is exceedingly difficult for scam victims to reach the customer service personnel positioned to assist them.
Payment provider platforms used by malicious actors to receive money from victims have been woefully unable to curb this problem, which often originates on social media platforms.
In recognition of the complexities of platform manipulation, some companies have begun to initiate voluntary commitments to “shar[e] insights and knowledge about the lifecycle of scams” with the goal of educating users on what to look out for.
While these efforts are positive developments, at best they indicate a growing recognition that social media companies lack direction in designing their platforms to limit harm caused by the ballooning scam economy.
At worst, social media companies’ short-term profit incentives directly converge with those of the malicious actors on their platforms.
It is also worth noting that social media users are better able to participate in the economy and generate advertising revenue when their funds are not siphoned into scammers’ accounts.
As major platforms cobble together written policies to address platform manipulation,
companies face few legal restrictions on the design choices that render their platforms attractive breeding grounds for scammers.
In the absence of binding legal obligations on social media companies, malicious actors are free to play platforms like instruments of manipulation.
Existing legal frameworks constitute a patchwork of schemes that provide state and federal enforcers and citizens few chances to have their injuries heard, let alone to vindicate their rights and pursue remedies.
Innovative litigation strategies, such as false advertising claims brought by private plaintiffs and the Federal Trade Commission (FTC), are stopgap solutions that have not stemmed the problem.
Section 230 of the Communications Decency Act of 1996, the cornerstone of social media law, together with First Amendment law and consumer law frameworks, fails either to provide recourse to social media scam victims or to explain legislative inaction in the face of the causal relationship between platforms’ design choices and the scams that transpire on those same platforms.
Furthermore, maladaptation of § 230’s immunity for platforms has created an inaccurate presumption of immunity for all choices, including design choices, made by social media companies.
This Note is the first to argue for a social media liability paradigm that centers platform design choices: a Platform Design Negligence (PDN) paradigm that establishes the circumstances for a clear assumption of liability in this digital environment. It offers a roadmap for an evolution in law and society toward coherent parlance on the impacts of twenty-first-century platform technologies. Social media companies should face liability when their design choices contribute to the deception of their users. When companies are aware of these deception risks and fail to take reasonable precautions, they cease to function as reasonable platforms and should become liable for injuries that follow. Through a full-throated adoption of this paradigm, victims and law enforcers could hold social media companies accountable for harms caused by manipulation conducted on, by, and through their platforms. Both federal and state courts, without the mandate of a statute, can actualize this paradigm by applying and building upon existing common law tort doctrine.
In Part I, this Note surveys the landscape of platform manipulation, discussing the harms caused by platform-based deception as well as the design choices that enable platform manipulation in practice. It also explores how social media companies profit from the scam economy. Part II turns to the absence of legal frameworks that apply to social media companies’ design choices in the context of platform manipulation. It underscores the relationship between platform design and platform manipulation. It also delineates the pitfalls of the prevailing voluntary self-governance paradigm for platform manipulation. Finally, Part III introduces the PDN paradigm that can serve social media companies, lawmakers, and victims as they pursue legal remedies and design interventions that curb the growing challenge of platform manipulation.