When the Platform User Fights Back (with Software)

RICK BARBER
Researcher, Crowd Dynamics Lab and Social Spaces Group, University of Illinois Urbana-Champaign

HARI SUNDARAM
Professor of Computer Science and Director, Crowd Dynamics Lab, University of Illinois Urbana-Champaign

It wouldn’t be hard to get whiplash from reading technology news headlines lately. On the one hand, the tech sector has matured. Its businesses now top the list of companies with the greatest total-dollar market value and most lobbying dollars spent [1]. Prosperity, however concentrated, and progress, however narrowly construed, seem to be on the march—with Silicon Valley positioned as the new avatar for both. But then there is the constant flow of tech-sector scandals around data privacy, misinformation, and labor issues, to name a few, all seeming to impact individual users. Despite this tension between booming financial returns and the upheavals that have resulted everywhere else, we are told the needle inevitably bends toward “progress.”

Our collective migration online offers much value at our fingertips. But at the individual level, there is a large asymmetry of power between users and the platforms we interact with online. We feel this differential acutely when we try in vain to limit our usage of a favorite app, opt out of pervasive tracking, or search the marketplace for products that aren’t designed to do both. The powerful market forces pushing company valuations ever higher come into conflict with our self-determination, and looking at market capitalizations in the hundreds of billions and lobbying budgets in the tens of millions, it’s hard not to conclude that we’re helpless in this particular fight; that losing our agency is inevitable.


But of course it’s not. For one, regulation is an obvious and important remedy for conflicts between what we want out of our technology and what actually occurs alongside the industry’s pursuit of ever more profit. For now, however, regulation is slow to come, and when it does arrive, we should all be wary of the tech sector’s ability to adapt. All that being said, even today users enjoy certain structural advantages against tech giants if they are interested enough to find and use the right tools. Amid engineered addictiveness, ubiquitous tracking, and the market structure resulting from data network effects, all of which limit choices a user might otherwise make in interacting with technology platforms, a number of software products already exist that can serve as an adjunct to regulation for restoring user autonomy.

To see how, consider someone interacting with Facebook. The user may approach the interaction with the goal of limiting their time spent on the platform to just five minutes, while Facebook’s objective might be to keep the user’s attention for as long as possible so it can both sell ads against it and collect more user behavior data. With these intentions in conflict, what is it about this interaction that makes its outcome reliably favorable to Facebook’s long-term goals? Both parties have the ability to deliberate beforehand about what they wish to get out of the encounter. However, when the interaction actually occurs, only one side (Facebook) invests considerably in delegating its participation to software. Facebook therefore isn’t of two minds—one impulsive and one deliberative—potentially in conflict. Its goals are compiled into the software stack that executes the non-human portion of the interaction, leaving behind a set of data it then carefully measures for insight into how to tweak its interaction strategy to get even more of what it wants in future encounters.

The human user, on the other hand, usually has few options beyond approaching the interaction extemporaneously. They may be using a piece of software called an ad blocker, which they downloaded after some considered judgment about online tracking, but ultimately whether their more deliberate goals will be achieved depends largely on how well they can keep those goals in mind as a stream of temptations scrolls by.


It needn’t be this way, however. With the right tools, aimed at allowing users to define the terms of the interaction before it actually happens, users can interact with online software as rationally as Facebook does with them, both by biasing the outcome toward their own goals and by generating auditable data about the interaction which might be used for their own benefit.

Overall, we as users have three advantages that can help us augment our own agency online. First, the idea that we might use software to implement our will when it’s in conflict with the platform’s is not a serious part of the threat model for any website or application currently in existence, even if it might be forbidden in the terms of service. The platforms assume that the user is non-adversarial. Second, the software model for interacting with these companies actually privileges the user as an interactant. That is, when we interact with an online platform, we’re usually given some data and code to run. In most contexts the client software we’re using (for example, a web browser) runs at a higher privilege than the platform’s code [2]. This means that as users we could ignore the platform’s directives and run alternative code on the data we are given. This simple observation leads to a number of potential strategies users might employ, as we discuss below. A final advantage comes from the high-leverage distribution economics of software. Sophisticated users can mine interactions for opportunities for software intervention, package the intervention into a browser plugin or app, and make these tools available to less sophisticated users.
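To make the second advantage concrete, here is a minimal sketch of what “running alternative code on the data we are given” can look like in practice: a WebExtension content script that lets the page load as usual and then strips out elements the user has decided they do not want. The selector is a hypothetical placeholder, not taken from any particular platform.

```typescript
// Minimal content-script sketch: the browser (our higher-privilege client)
// runs this code over the page the platform served, removing content the
// user has opted out of. The selector below is a hypothetical placeholder.

const UNWANTED_SELECTOR = "[data-ad], [aria-label='Sponsored']";

function prune(root: ParentNode): void {
  root.querySelectorAll(UNWANTED_SELECTOR).forEach((el) => el.remove());
}

// Prune what is already on the page, then keep pruning as the platform
// streams in new content.
prune(document);
new MutationObserver(() => prune(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
```

Because the script runs in the browser rather than on the platform’s servers, the platform can suggest what to render but cannot force the user’s client to comply.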

Hijacking Design to Curb Addictiveness 

Our own deliberations wage an eternal battle with our impulses. In the online economy, impulse has a reliable ally in the advertising business model, while deliberation mostly fends for itself as a rogue state. Many of us would like to spend less time online but find this choice inadequately supported by our online services. Business models are upstream of product design, which is upstream of individual and collective behavior online. That the business model for many services involves the capture of attention means that many online businesses need their applications to be addictive. The way toward realizing this goal lies in the design of the interaction experience.

Several design patterns in common use make it easier for impulse to dictate our platform usage, including the employment of certain colors, notifications, and responsiveness achieved by technical means like prefetching content and infinite scrolling. Every one of these design patterns is implemented in practice, however, as a suggestion by the platform for what code to run on our own devices. For websites especially, it isn’t actually difficult to build software that disregards default designs optimizing for the company’s objectives and instead provides affordances for the user’s carefully deliberated goals, helping to restore their autonomy.

In fact, we recently developed a browser plugin called DeFacebook that progressively degrades the user experience of Facebook as a user exceeds their desired usage. When the time comes, the plugin washes out colors, makes text more and more difficult to read, and loads new content more and more slowly. Through these efforts we hope to help a user associate usage with frustration, right there in the moment of the interaction where the conflict between impulse and deliberation is decided. We believe this approach will have greater efficacy in curbing usage than applications that merely block the site, though this is still under investigation.
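The mechanics of this kind of intervention are simple enough to sketch. The snippet below is a minimal illustration of progressive degradation in a content script, assuming a user-chosen time budget; the filter curve and timing constants are illustrative assumptions, not DeFacebook’s actual parameters.

```typescript
// Minimal sketch of a progressive-degradation content script. The budget,
// filter curve, and polling interval are illustrative assumptions.

const BUDGET_MS = 5 * 60 * 1000; // the user's chosen budget for this visit
const startedAt = Date.now();

function degrade(): void {
  const overageMs = Date.now() - startedAt - BUDGET_MS;
  if (overageMs <= 0) return;
  // Wash out colors and blur text more as the overage grows.
  const minutesOver = overageMs / 60000;
  const saturation = Math.max(0, 100 - minutesOver * 20); // percent
  const blur = Math.min(2, minutesOver * 0.2);             // px
  document.documentElement.style.filter =
    `saturate(${saturation}%) blur(${blur}px)`;
}

setInterval(degrade, 5000); // re-evaluate every five seconds
```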


HabitLab is another recent research project and plugin that provides an extensible ecosystem for easily authoring and sharing interventions of this nature [3]. This approach reduces the burden of creating new interventions, allowing more developers to explore strategies that a user might employ to get more of what they want out of their online interactions. Inserting pre-programmed interventions or modifications into the interaction experience before it happens restores some measure of user freedom: an ability to author the terms of the interaction that the platform otherwise exploits for its own ends.


Interacting Adversarially with the Online Tracking Apparatus 

Behaviorally targeted advertising and the tracking associated with it limit a user’s ability to choose how much of their online behavior to disclose to others. There is, of course, one obvious option for combating this: install an ad or tracker blocker. However, the interaction footprint of a user who has done so is conspicuous, and given the ever-expanding toolkit of trackers [4], a blocker needs to be much more comprehensive than most are in order to do the job thoroughly. Even if a blocker manages to anticipate all known strategies for tracking, the monolithic logic of the blocking intervention is easy for trackers to square off against. The tracking ecosystem only needs to find one new trick to render blocking—and therefore the user’s choice of how much to disclose—temporarily ineffective.

Other approaches have a more ecological robustness in that they present an expressive and extensible logic for intervention, as in the HabitLab example above. In particular, many researchers have worked on ways to help a user obfuscate their activity online, for example, by adding random noise to their browsing behavior or by padding their own browsing behavior with that of others [5-6]. A drawback of these approaches is that they are computationally expensive: a large volume of noisy or spoofed behavior is required to get a good result. Still, they provide a useful starting point for thinking about how to more covertly achieve the goal of being unknowable online.
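To illustrate the flavor of these noise-injection approaches, here is a minimal sketch of a client that periodically issues decoy search queries alongside the user’s real ones, in the spirit of tools like TrackMeNot [7]. The query pool, timing, and endpoint are illustrative assumptions rather than any cited system’s actual behavior.

```typescript
// Minimal sketch of noise-based obfuscation: issue decoy requests drawn
// from a pool of plausible queries. The pool, the jittered timing, and the
// search endpoint below are all placeholders for illustration.

const DECOY_QUERIES = ["weather tomorrow", "used bikes", "pasta recipes"];
const SEARCH_ENDPOINT = "https://search.example.com/?q="; // placeholder

function randomDelayMs(): number {
  // Jitter between 1 and 10 minutes so decoys don't form an obvious pattern.
  return (1 + Math.random() * 9) * 60 * 1000;
}

async function issueDecoy(): Promise<void> {
  const q = DECOY_QUERIES[Math.floor(Math.random() * DECOY_QUERIES.length)];
  await fetch(SEARCH_ENDPOINT + encodeURIComponent(q), { mode: "no-cors" });
  setTimeout(issueDecoy, randomDelayMs());
}

setTimeout(issueDecoy, randomDelayMs());
```

The drawback noted above is visible even in this toy version: the decoy traffic must be voluminous and convincing enough to drown out the real signal, which costs bandwidth and computation.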

As before, we think there is promise in manipulating the point of interaction to help a user achieve their desired outcome. Given that much of the code for tracking and advertising is actually executed in the users’ own web browsers, executing this code with plausible, adversarial inputs or in unintended ways may allow users to obfuscate more effectively and efficiently than blocking trackers or using obfuscation that treats the tracking logic as a black box [7]. The science of plausible, adversarial inputs that fool inference machinery in an optimal sense is currently booming in the field of adversarial machine learning [8]. Researchers have discovered that sophisticated learning algorithms are reliably fragile when what is presented to them is carefully manipulated. Applying insights from this field to the problem of tracking prevention will allow us to exercise more choice over what to disclose.
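One concrete way to execute the tracker’s code in unintended ways is to interpose on the browser APIs that fingerprinting scripts read from, so the tracker’s own logic still runs but over perturbed inputs. The sketch below wraps the canvas readout API with tiny pixel perturbations; the choice of canvas as the surface and the noise model are our illustrative assumptions, not a method from the cited work.

```typescript
// Sketch: wrap the canvas readout API so a fingerprinting script that calls
// it receives slightly perturbed pixel data. Canvas is only one of several
// fingerprinting surfaces; the low-bit-flip noise model is illustrative.

const originalToDataURL = HTMLCanvasElement.prototype.toDataURL;

HTMLCanvasElement.prototype.toDataURL = function (
  this: HTMLCanvasElement,
  type?: string,
  quality?: number,
): string {
  const ctx = this.getContext("2d");
  if (ctx) {
    const image = ctx.getImageData(0, 0, this.width, this.height);
    for (let i = 0; i < image.data.length; i += 4) {
      // Flip the low bit of each pixel's red channel: invisible to the
      // user, but enough to destabilize a canvas fingerprint hash.
      image.data[i] ^= 1;
    }
    ctx.putImageData(image, 0, 0);
  }
  return originalToDataURL.call(this, type, quality);
};
```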

Network Effects and Data Marketplaces 

The impositions on our autonomy online come from businesses which have one thing in common: an extraordinarily high-profit business model based on having cornered the market on certain sets of data crucial for targeting advertising. The advantages of exclusive access to these troves of data make competition a nonstarter—try to get a search engine funded in Silicon Valley these days and you’ll find it’s nearly impossible. Not only that, these profits create a palpable sense of opportunity cost for many consumer-focused venture capitalists and entrepreneurs who are considering building a product that doesn’t monetize with advertising. This reduces experimentation with consumer software products that aren’t based on monopolizing a segment of behavioral data and also keeps collected consumer data from flowing to potentially more productive uses. This totalizing logic both limits and obscures the chance to choose freely in the marketplace, as a consumer or even as an innovator.

It seems odd, however, that a non-rivalrous good like user data should be subject to market capture. Interacting with a product like Google search generates data exhaust about that interaction. This interaction data is valuable to Google because it allows the company to improve its user-facing and advertiser-facing products, in the former case making higher quality, stickier services for the user and in the latter making that user’s attention generate more revenue. The fact that no one but Google receives the details of our interactions with its service is a source of its competitive defensibility. As users, though, we could collect and donate this data to an alternative search engine or to researchers, or sell it to someone who thinks they might put it to profitable use.

As an example of such a transaction, it was recently reported that Facebook and Google had been compensating users with gift cards if they downloaded phone applications that monitored the data exchanged at the point of interface between the user and other companies [9-10]. In the Facebook case, the application would “collect information such as which apps are on your phone, how and when you use them . . . collect information about your internet browsing activity” and “collect this information even where the app uses encryption” [11].


The fact that this data exhaust is economically valuable and may have uses beyond those that the platform can employ is the impetus for recent arguments in favor of treating data as labor [12] and allowing it to be priced in a marketplace so that it may flow to productive uses. It is often presupposed, but not proven, that the marginal value of this kind of data is zero, and compelling arguments suggest this might not be so [13]. The means for collecting this data are wired into the way we interact with the companies that currently monopolize it. Combined with more effective obfuscation, as discussed earlier, software which intelligently regulates the point of interaction between users and their online services might allow users to more decisively say with whom they will share data and on what terms. For implementation, extensible client-side software could again prove key: we might see a general-purpose tool that lets users install paid “collection routines” which aggregate client data of interest. Transparent permissions and market incentives might encourage data collectors to ask only for the data they need. If obfuscation reaches a critical point of effectiveness, even the current data Goliaths may have to come to the table and renegotiate the terms of their relationship with us.
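As a concrete sketch of what such a collection routine might look like, consider the hypothetical interface below, in which each routine declares up front what data it wants and what it pays, and the user’s agent enforces those permissions before anything leaves the device. All names and fields are illustrative assumptions, not an existing API.

```typescript
// Hypothetical "collection routine" interface: the routine declares exactly
// what it wants, and the user's agent enforces those permissions before any
// data is handed over. Every name here is an illustrative assumption.

interface CollectionRoutine {
  buyer: string;                 // who is paying for the data
  pricePerRecordUsd: number;     // offered compensation per record
  permissions: string[];         // e.g. ["search_queries", "page_titles"]
  collect(record: Record<string, unknown>): Record<string, unknown>;
}

function runRoutine(
  routine: CollectionRoutine,
  record: Record<string, unknown>,
  granted: Set<string>,
): Record<string, unknown> | null {
  // Refuse to run if the routine asks for anything the user hasn't granted.
  if (!routine.permissions.every((p) => granted.has(p))) return null;
  // Pass only the permitted fields into the routine.
  const visible = Object.fromEntries(
    Object.entries(record).filter(([key]) => granted.has(key)),
  );
  return routine.collect(visible);
}
```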

Allowing captured data to flow into a marketplace would stimulate the development of tools for protecting user privacy and lead to a larger variety of products that make use of data and, somewhat paradoxically, products that don’t. The difference in profitability between a product which extracts value from data at the margin and one that doesn’t is huge. Subjecting the flow of data to the marketplace would force these monopoly profits into competition, increasing the variety of ways consumer software firms choose to monetize. Market choices which today would be profitable, but not profitable enough to attract notice from venture capitalists, the tech press, and engineering talent, would fare better when competing with this suddenly more modest data business model.

We think there are many opportunities for increasing user autonomy with software. When a user’s considered goals are in conflict with those of the platform with which they are interacting, the model of interaction often leaves room for the user to refactor the nature of the interaction with their own software. There are challenges, however. The problem of finding and implementing an intervention is easiest to solve for the web since the web browser provides APIs with greater privilege than the web platform’s code. Mobile platforms present many more challenges since interaction is often mediated by a client app owned by the platform one is interacting with. However, the mobile operating system still offers some opportunities to interpose software that acts on behalf of the user, as evidenced by the Facebook and Google data collection apps discussed above.

Furthermore, all of the approaches we cited require that some fundamental work be done. A better understanding of the relationship between design and addictiveness will be necessary to use software to block addictive design patterns. Adversarial machine learning insights will need to be adapted to the way online tracking systems make inferences and to the limited information those systems reveal to the user about the inferences they make. Lastly, it might be desirable for data marketplaces to have a common privacy-preserving software protocol but a decentralized design. Designing a system which doesn’t require a trusted third party to host and interpret transaction data would protect the data from prying eyes not involved in the transaction itself. Building such a system will require deep study of both technical and economic mechanisms, which may be prone to unintended consequences.

A final challenge is that increased adoption of these kinds of tools might make the point of interaction a more thoroughly contested space than it is today. What works now might break when the platforms begin to defend against empowered users, which is why we’ve emphasized extensible solutions that would present a varied frontier to defend against. Should a software-based arms race occur, however, the structural advantage goes to the user [14].

 

Endnotes

1. Hamza Shaban, “Google for the First Time Outspent Every Other Company to Influence Washington in 2017,” The Washington Post, January 23, 2018, available at https://www.washingtonpost.com/news/the-switch/wp/2018/01/23/google-outspent-every-other-company-on-federal-lobbying-in-2017/

2. Grant Storey et al., “The Future of Ad Blocking: An Analytical Framework and New Techniques,” May 24, 2017, available at https://arxiv.org/abs/1705.08568.

3. Geza Kovacs, Zhengxuan Wu, and Michael S. Bernstein, “Rotating Online Behavior Change Interventions Increases Effectiveness but Also Increases Attrition,” Proceedings of the ACM on Human-Computer Interaction 2, No. CSCW, Article 95 (November 2018), available at https://hci.stanford.edu/publications/2018/habitlab/habitlab-cscw18.pdf.

4. Nick Nikiforakis et al., “Cookieless Monster: Exploring the Ecosystem of Web-Based Device Fingerprinting,” 2013 IEEE Symposium on Security and Privacy, June 25, 2013, available at https://ieeexplore.ieee.org/document/6547132.

5. David Rebollo-Monedero, Jordi Forne, and Josep Domingo-Ferrer, “Query Profile Obfuscation by Means of Optimal Query Exchange between Users,” IEEE Transactions on Dependable and Secure Computing 9, no. 5 (September/October 2012): 641–654, available at https://ieeexplore.ieee.org/document/6138865.

6. Martin Degeling and Jan Nierhoff, “Tracking and Tricking a Profiler: Automated Measuring and Influencing of Bluekai’s Interest Profiling,” in WPES’18 Proceedings of the 2018 Workshop on Privacy in the Electronic Society, October 2018, 1–13, available at https://dl.acm.org/citation.cfm?id=3268955.

7. Sai Teja Peddinti and Nitesh Saxena, “On the Privacy of Web Search Based on Query Obfuscation: A Case Study of TrackMeNot,” in PETS’10 Proceedings of the 10th International Conference on Privacy Enhancing Technologies, July 2010, 19–37, available at https://dl.acm.org/citation.cfm?id=18811537.

8. Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio, “Adversarial Examples in the Physical World,” technical report, Google, Inc., 2016, available at https://arxiv.org/abs/1607.02533.

9. Josh Constine, “Facebook Pays Teens to Install VPN That Spies on Them,” TechCrunch, January 29, 2019, https://techcrunch.com/2019/01/29/facebook-project-atlas/.

10. Zack Whittaker, Josh Constine, and Ingrid Lunden, “Google’s Also Peddling a Data Collector through Apple’s Back Door,” TechCrunch, January 30, 2019, https://techcrunch.com/2019/01/30/googles-also-peddling-a-data-collector-through-apples-back-door/.

11. Josh Constine, “Facebook Pays Teens.”

12. Imanol Arrieta Ibarra et al., “Should We Treat Data as Labor? Moving Beyond ‘Free,’” American Economic Association Papers & Proceedings 1, no. 1 (2017), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3093683.

13. Ibid.

14. Grant Storey et al., “The Future of Ad Blocking.”
