-
2020-11-14
On Tim Hwang’s book: Subprime Attention Crisis
My friend Doc Searls has been talking about this book repeatedly in recent months, as have many others interested in rolling back surveillance capitalism, improving privacy and user agency, and cleaning up the unholy mess that on-line advertising has become. Finally I have read the book, and here are a few notes.
Tim Hwang makes three core points:
- Programmatic on-line advertising is fundamentally, irredeemably broken.
- It’s not a matter of whether it will implode, but just when.
- Apply the lessons from the 2008 subprime mortgage crisis: advertising inventory is a different asset class, but the situation is fundamentally the same: eroding fundamentals in an opaque, overhyped market, which will lead to a crash with similarly major consequences when it occurs.
I buy his first point. I mostly buy his second, but there are too many important differences from the market for collateralized mortgages in 2008 for me to buy his third. Ultimately that parallel isn’t that important, however: if he’s right that programmatic on-line advertising is headed for something dramatic, whether it looks like 2008 subprime mortgages or some other crash doesn’t matter in the end.
Why would anybody say programmatic on-line advertising is broken? He has many examples (go read the book), but let me mention my favorite, from personal experience: the ads Spotify serves me:
-
Spotify, for a long time, advertised joining the Marine Corps to me. I should be flattered by how young, vigorous, and gung-ho they consider me, but hmm, I don’t think so. This must be because they have some wrong data about me; while Spotify got the Marine Corps’ money all the same, the Marine Corps completely wasted its spend.
While this example is particularly egregious, Hwang has many others, which together argue that this is a major and pervasive problem.
-
I recently downloaded the personal data Spotify has about me, which I can do because we have the CCPA in California. Looking at the advertising subjects they have tagged me with, guess what?
It was worse than I was afraid of. I loaded the tags into a spreadsheet, and categorized them into three groups:
- Interests I definitely have. Example: “Computers and software high spender”. Guilty as charged.
- Interests I definitely do not have. Example: “March Madness Basketball Fan”. What? I have never watched basketball in my life. I don’t actually know what “March Madness” might even be, and I’m disinclined to look it up.
- Interests I might or might not have; “Meh”, so to speak. Example: “Vitamin C category purchasers”. Maybe I bought some one day. I don’t remember.
How do you think these categories break down? The largest group (30 of 66, almost half) of the tags Spotify has about me is in the Meh category. Will I buy more Vitamin C if they advertise it to me? Maybe, but it’s quite unlikely. Consider the ad spend in this category mostly wasted on me.
But here is the kicker: of the remaining 36 tags, 24 were “definitely not” and only 12 were “definitely yes”. Twice as many categories about me were absolutely wrong as were correct!
Only 18% of the total categories were clearly correct, and worth spending ad money on to target me.
Eighteen.
From the names of the tags in the Spotify export, I guess most of them were purchased from third parties. (Makes sense: how would Spotify know whether or not I’m interested in Vitamin C?) In other words, 18% of the data they purchased about me was correct, 36% incorrect, and the rest more or less random. No wonder Hwang immediately thinks of junk mortgage bonds with numbers like these.
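For the curious, here is a minimal sketch of my spreadsheet tally redone in Python. The bucket counts are the ones from my export; the example tags in the comments are just the ones quoted above, not the full Spotify data.

```python
# Back-of-envelope tally of the Spotify advertising tags from my CCPA export.
# The counts come from my spreadsheet; the example tags are just the ones
# quoted above, not the full export.
tag_buckets = {
    "definitely yes": 12,  # e.g. "Computers and software high spender"
    "definitely not": 24,  # e.g. "March Madness Basketball Fan"
    "meh":            30,  # e.g. "Vitamin C category purchasers"
}

total = sum(tag_buckets.values())  # 66 tags in all
for bucket, count in tag_buckets.items():
    print(f"{bucket:>14}: {count:2d} tags ({count / total:.0%})")

# definitely yes: 12 tags (18%)
# definitely not: 24 tags (36%)
#            meh: 30 tags (45%)
```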
But as he points out, advertisers keep spending money anyway. Why? I suggest the answer is very simple: a lack of alternatives.
If you stop advertising on-line, what are you going to do instead? As long as there isn’t a better alternative, the rational plan is to hold your nose, go to your CEO, and say: yes, I know that today not just half but a full 82% of our advertising money is wasted, but it’s better to waste all that money than not to advertise at all. I can understand that. Terrible, but reality.
So, for me, the more interesting question is: “How can we do better?” And I think the times are getting ripe for doing something better… stay tuned :-)
-
2020-10-09
Three Scenarios for Rolling Back Surveillance Capitalism
Are we stuck with Surveillance Capitalism? I hope not.
But what are realistic alternatives? Alternatives that keep the amazing wonders that are consumer technologies in 2020, but don’t invade our privacy, don’t spread misinformation, give us back a measure of control over our electronic lives, don’t set us up for manipulation, and help rather than hurt our mental health?
Here are three scenarios for how we could get out.
Scenario 1: Regulation Bites
Building on the success of the GDPR and buoyed by a growing data sovereignty movement supported by both the political right and left, the European Union intensifies its regulation of cyberspace, and in short order:
- disallows all businesses from moving any personal information pertaining to its residents to data centers outside of the European Union;
- broadly disallows user tracking except in very narrow circumstances; in particular, cross-site and cross-app user tracking is prohibited, and advertising networks can no longer target audiences smaller than 100,000 members;
- requires all social and communications apps to implement full data portability (including lossless transmission to a new provider), similar to phone number portability.
The dominant American social networking giants focus their efforts on rolling back these regulations in the courts, but in the meantime nimble European upstarts simply copy the feature sets of the dominant platforms and implement them in line with European regulations. Local politicians mention these apps at every opportunity.
The upstarts market their products through schools, and privacy-conscious German parents switch an entire new generation of users over to the European apps. When e-government initiatives let citizens interact with their governments much more easily and securely through the new apps, the network effect starts hurting instead of favoring the American surveillance platforms.
With integration now easy, a European startup figures out how to gamify fact checking on this new open platform, and on-line misinformation drops rapidly. This increases user engagement and user confidence, and few people ever want to go back to the old apps.
Countries outside the EU that are concerned about data sovereignty have been watching carefully and quickly follow the European model, through regulation and targeted industrial policy. Facebook and friends are playing catch-up and are forced to play by the new rules to keep at least some of their user base in those countries.
And when the European upstarts start to market their apps internationally, even large swathes of the American population move over, because they don’t want to be surveilled either.
Scenario 2: A Global Disinvestment Campaign Leads to a Vibrant Good Technology Market
Under the slogan “Facebook is just as bad as burning oil”, digital rights activists have partnered with veterans of the disinvestment campaigns against South African apartheid, tobacco and fossil fuels for an international public relations campaign targeting investment and retirement funds that invest in companies monetizing surveillance.
Reminded of the impact of previous disinvestment campaigns and sensing a business opportunity, fund managers globally are rapidly rolling out new niche funds that promise to invest only in companies that use personal data responsibly. Their initial target markets are minorities, and parents saving for retirement who are concerned about their kids’ safety when using technology.
Upstart VCs jump on the opportunity that this new, focused capital represents and funnel it, via special-purpose “Good Tech Only” venture funds, to eager entrepreneurs worldwide to build next-generation social networking, commerce and virtual/augmented reality companies, without fear that the VCs will pressure them to monetize customer data anyway when the company hits a difficult patch.
Having made a clean break from the surveillance business model, these upstarts are able to innovate rapidly on both business model and technology. For example, enabled by the new business models, interoperability with other vendors has become a value driver rather than a leak in the enterprise’s moat. This completely changes the dynamics of the marketplace.
As a result, entirely new product categories, no longer prevented by vendors’ data-hoarding strategies, explode onto the scene: for example, much better targeted advertising, because users can volunteer personal data without fear of privacy violations; proactive maintenance of consumer products by an army of service providers no longer inhibited by hermetically sealed cloud-castle products; and far more reuse and upcycling of previously discarded products.
As the Good Tech brand rises and unprecedented features become available, more and more technology users are willing to make a clean break with the legacy surveillance platforms, and shame their friends into moving from the legacy social networks to Good Tech as well.
Ultimately, the legacy vendors practicing surveillance capitalism face shrinking user bases and less access to capital, and structurally cannot compete with the new generation of Good Tech companies.
Scenario 3: Frustrated Users and Open-Source Developers Start Cooperating for Mutual Benefit
It started small, with a few technically-competent digital rights activists pooling their expertise and a little bit of money to operate their own Mastodon server, so they could stay in touch just like on Twitter, but without an unaccountable third party in the loop. (Note: this, of course, already has happened; there are many Mastodon deployments like this all around the world, some of which have already progressed further along the lines outlined below.)
As interest and user numbers grew, the previously informal collaborations started to be formalized: users not contributing their labor would pay a monthly fee, from which systems administrators would be paid to keep the deployment up and running reliably. Over time, the initial collaborative decision-making process for the project morphed into a formal cooperative governance structure in which all stakeholders – users and maintainers – have equal rights. All matters affecting the project are decided democratically, although different cooperatives employ different styles of governance, including direct, liquid and representative democracy.
Soon users started to ask for additional tools provided to them in a similar manner, like document sharing, calendaring, e-mail, and more. Accountants would ask: “Microsoft charges me $6.99 per month to access Excel. If I pay the same amount to the coop, can’t we host something like Excel ourselves, so I can be certain that my clients’ financials stay private instead of whatever Microsoft does?” Other users in the coop declared that they had similar needs and banded together, money in hand, to fund a project, which attracted open-source software developers who committed to porting open-source collaborative document editing software into the cooperative’s environment and keeping it maintained for a monthly fee paid by its users.
Of course, the apps operated by the various cooperatives always interoperated, because that’s what users want and no vendor subject to a coop’s rules has the opportunity (or desire) to lock anybody in. So leaving one cooperative to join another became as simple as moving banks today, with no money or data lost in the process.
Some projects didn’t work out. Some money was wasted. Some coops imploded. Some users left because, initially, the quality of the coops’ products was below that of the social networking products of today’s dominant internet platforms, funded by billions of Wall Street dollars. However, because the cooperative structure ties the needs and wants of the users directly to the revenue opportunity of the vendors, with no independent shareholders to satisfy, the match between needs and features ultimately became much better than in purely capitalistic for-profit models, creating legions of fanatically happy users and profitable vendors entirely outside the need or desire for surveillance capitalism.
Some final thoughts
Of course, there are other scenarios; elements of these scenarios could be combined in different ways or shake out differently, and predictions are hard, particularly about the future :-)
But there are people working on each of those scenarios today (myself included!), and it is not obvious to me that those projects are doomed. In other words, they have promise! How can we help them be more likely to succeed? Because I want out from surveillance capitalism, and chances are, you do, too!
(Please get in touch.)
-
2020-10-01
Categorizing social connections
Social networking websites categorize the people I might be connected to into:
- those I am (bi-directional) “friends” with;
- those that I follow (but they don’t follow me back), and vice versa;
- those that are related to my friends, but not me directly;
- those whose stuff the social network overlords think I should know about (that includes advertisers);
- everybody else.
This categorization seems contrary to the way non-dysfunctional human relationships work. What would be a better one? Here is my version:
- My close family and friends. These are the people who have a key to my house; they live in my house or are welcome all the time, and will help out no matter what time of night it is. I’ll call them Family, whether we have common genes or not.
- Next are the people that I know well, but whom I wouldn’t normally give a key to my house. I know them well because I have worked with them, I have had fun with them, I know a bit about their life history, and I have some idea about their families and what they worry about. Traditionally this would be an actual village, but in times of the internet these people may live far away. It may include my neighborhood, my congregation, my political faction, or people I have done projects with. I call them my Village.
- Beyond that is my Tribe. That’s people with whom I share a clear interest, although I may know many members of my tribe only by name, or not at all. This would include people of the same faith or Weltanschauung as mine, people in the same town, or people of the same profession or political persuasion with a similar value system.
- And there is everybody else, the World.
There may be one more tier between Tribe and World, sort of at a country level. Other than by allegiance to the same government (or grudging acceptance of the same government?), I’m having some difficulty defining this tier, so I’m leaving it out of this post.
Worth noting is that I can be a member of several of those groupings. For example, I can be a member of several Villages (my home town vs. my college town) or Tribes (say, my politics and my passion for electronics). Or maybe the better way of looking at it is to use this categorization only from my point of view; what is a single Village to me may not be a single Village to anybody else.
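To make this a bit more concrete, here is a minimal sketch of how an app might model these tiers. All names here (Tier, Circle, Contact) are hypothetical illustrations I made up, not a spec.

```python
# A hypothetical sketch of the Family / Village / Tribe / World tiers.
from dataclasses import dataclass, field
from enum import Enum


class Tier(Enum):
    FAMILY = 1
    VILLAGE = 2
    TRIBE = 3
    WORLD = 4


@dataclass(frozen=True)
class Circle:
    name: str   # e.g. "home town", "college town", "electronics hobbyists"
    tier: Tier


@dataclass
class Contact:
    handle: str
    circles: set = field(default_factory=set)  # one person can belong to several Circles

    def closest_tier(self) -> Tier:
        # The relationship is always seen from *my* point of view:
        # the closest tier any shared circle places this person in.
        return min((c.tier for c in self.circles),
                   key=lambda t: t.value, default=Tier.WORLD)


# Example: someone from my college town who also shares my electronics tribe.
alice = Contact("alice", {Circle("college town", Tier.VILLAGE),
                          Circle("electronics hobbyists", Tier.TRIBE)})
print(alice.closest_tier())  # Tier.VILLAGE
```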
I’m sure social science has lots of categorizations like that. Not being a social scientist, what am I missing in this home-grown version?
-
2020-07-16
What if COVID-19 doesn’t end?
All the discussion has been about how to limit new infections, and how to cope for as long as COVID-19 has no cure. It’s been terrible enough, particularly in shockingly incompetent countries like the US. We’ve consoled ourselves with the hope that even if things are terrible now, science is working hard on a vaccine, and even if it takes a year or more, one day we will have one, we will get everybody injected, and then the pandemic nightmare will be over and we can go back to normal.
Based on recent news, we may have to rethink that plan. If it indeed turns out that antibodies disappear from infected people within months, chances are that antibodies will also disappear from vaccinated people, which would make vaccination, should a vaccine be found, effective for only a few months.
That would mean you’d have to re-vaccinate every, say, 6 months. In a country the size of the US, you’d have to vaccinate, say, 200 million people (herd immunity) every 6 months, which works out to roughly 1.5 to 2 million vaccinations every single business day.
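A quick back-of-envelope check of that number (a sketch, assuming roughly 130 business days in six months):

```python
# Back-of-envelope: re-vaccinating 200 million people every 6 months.
people = 200_000_000
business_days = 26 * 5                  # ~6 months of weekdays, ignoring holidays
per_day = people / business_days
print(f"{per_day:,.0f} vaccinations per business day")  # ~1,538,462
```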
I don’t think that works. And it certainly does not work in poorer countries with less infrastructure.
So there is now a real possibility that COVID-19 will not go away. And even if we were willing to accept the death rate on a permanent basis, it’s hard to believe we could accept the roughly 10x larger number of permanently injured people that would result.
That would mean the current state of affairs becomes permanent:
- severely curtailed long-distance travel, with long mandatory quarantines upon entry;
- no large / mass events ever again;
- far less in-person contact than we’ve been used to as a species;
- contact tracing and mandatory quarantines as core functions of government, and probably not very gentle ones at that;
- fewer vacations, with a much smaller range of possible vacation activities;
- entire industries become non-viable.
Certain? No. Possible? Entirely. We may want to start considering this as a real possibility.
-
2020-07-11
Uncommon, worthwhile insights on AI and the climate emergency by Cory Doctorow
Published in Locus Magazine. Here are some selected quotes, with comments.
I am an AI skeptic. I am baffled by anyone who isn’t.
I don’t see any path from continuous improvements to the (admittedly impressive) “machine learning” field that leads to a general AI any more than I can see a path from continuous improvements in horse-breeding that leads to an internal combustion engine.
Yep. Let me add that everybody I have encountered who is hyping a future AI nirvana has something to sell. So let’s treat it like the claims of any salesman who sells miracle cures.
Remediating climate change will involve unimaginably labor-intensive tasks, like relocating every coastal city in the world kilometers inland, building high-speed rail links to replace aviation links, caring for hundreds of millions of traumatized, displaced people, and treating runaway zoonotic and insectborne pandemics.
These tasks will absorb more than 100% of any labor freed up by automation
Putting aside whether we can “remediate climate change” at all (given where we are, IMHO we can now at best hope to adapt to it somehow), he’s absolutely correct that whatever we attempt to do is going to be immensely labor-intensive. And because it’s all new and hasn’t been done before, it cannot really be automated, as automation is fundamentally about having machines repeat the same thing over and over again. (That is, if you agree with him, as I do, that a general AI is a looong time away and certainly will not arrive in time to solve this crisis for us; for that, it would have had to arrive 30-50 years ago.)
… the locus of the problem with technological unemployment: it’s not a technological problem at all, it’s an economic one.
Exactly. If technology suddenly took the jobs of gazillions of people, it’s not like there is nothing that the world needs to get done any more. Look around you: there are tons of things that should get done, from sweeping the street you live on more often to spending quality time with foster children or teaching people online the basics of science so they won’t fall for as many hoaxes.
Our current economic system has always had this very baffling feature of sending people home to watch TV when the economy tanks, instead of making us all work much harder to get the economy back out of the hole it is in! It’s the opposite of what should happen!
And as Cory says, the reason this happens is that private-sector employment is correlated with economic success, not with the number and size of the problems to be solved. This is particularly important because many believe, myself included, that any realistic attempt to deal with the crisis will have to accept some form of economic de-growth, aka shrinkage, while creating a lot of work.
when the pandemic crisis is over, 30% of the world will either be unemployed or working for governments.
Very possibly so. But contrary to what Cory implies, there will also be a significant (30%? similar to the unemployment rate?) downturn in income. That’s because if 30% of people don’t work, or work in jobs for which no business model exists, they don’t produce things that others will pay for, and so we are all commensurately poorer. Money printing may make this effect less visible in prices, but as he says, money printing does not produce the things we need or want.
Here’s the full piece published by Locus Magazine. Unfortunately it also contains a bunch of mistakes, such as about the consequences of money printing. I have chosen to ignore those because the good points are really worthwhile and shouldn’t be obscured.
-
2020-06-17
We need best practice templates for tech governance (just like we have a library of open-source licenses)
Ever learn things while being an invited “expert” on a panel at some conference? It just happened to me, again, at today’s Transparency By Design Summit.
We were discussing how to collect consumer healthcare data responsibly, for COVID-19 and beyond, and the challenge of how to (justly) gain the trust of the people whose data we would like to collect. Because if they don’t trust us, they won’t let us collect the data, or will even poison what they give us. The core question is:
How do I know that you, the data collector/medical researcher/public health system, will indeed do what you promised? (About privacy, data retention, anonymization, sharing etc)
And the answer, as always, is “good governance”, followed by a bunch of hand waving: just what exactly does this mean? What is that thing called “good governance” of a system that includes a lot of technology and a lot of humans developing and operating that technology? Take a COVID-19 contact tracing app: there’s the code, and the release process, and the data sharing, and the employment agreements of the people who touch the code or the data that hopefully will oblige them not to do “bad things”, and the legal enforcement and the audit trails and what have you. It’s not simple, and goes far beyond just “the code”.
First of all, we have few examples where good governance is actually practiced, so we are not used to it. Worse, we have nothing resembling agreement on what it actually means, in detail. Even my example enumeration above is woefully lacking in detail.
It occurs to me it’s a bit like open-source licensing of code was about 20+ years ago, with everybody having their own software license (or none at all), many of which were homegrown and not very professional. Fortunately, the open-source world has since coalesced around a fairly small number of primary open-source licenses (like GPL, AGPL, Apache, MIT and a few more), which are fairly well understood.
We need the same thing for technology governance: a set of governance templates that technology systems can adopt. They could, for example, include open-source licensing for their code component (but they don’t necessarily need to), but they need to go far beyond that, covering questions such as:
- What is the data retention period?
- What’s the process to make sure the data is deleted after the data retention period?
- How do we find out whether the process is or isn’t being followed?
… and many other related questions. If we had such a series of templates, innovation in governance would still be possible (just create another template), but we could collectively understand what governance looks like for a given system and, for example, fix governance problems one bug at a time: something that is not possible at all today.
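To illustrate what I mean (purely a sketch; the field names are ones I made up, not a proposed standard), a machine-readable governance template might start out looking like this:

```python
# A purely hypothetical sketch of what one reusable governance template
# could record; the field names are invented for illustration only.
governance_template = {
    "name": "covid-contact-tracing-baseline",
    "code_license": "AGPL-3.0",          # the code component may carry an open-source license
    "data_retention_days": 21,           # what is the data retention period?
    "deletion_process": "automated purge job after retention period expires",
    "compliance_check": {                # how do we find out whether the process is followed?
        "who": "independent third-party auditor",
        "how_often": "quarterly",
        "report_published": True,
    },
}
```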
It would go a long way towards all of us regaining trust in technology, whether it’s public health systems pushing COVID apps or Facebook pushing the latest “trust us, we won’t spy on you” update.
Anybody working on anything like that? Would love to hear about it.