Published in Locus Magazine. Here are some selective quotes with comments.
I am an AI skeptic. I am baffled by anyone who isn’t.
I don’t see any path from continuous improvements to the (admittedly impressive) “machine learning” field that leads to a general AI any more than I can see a path from continuous improvements in horse-breeding that leads to an internal combustion engine.
Yep. Let me add that everybody I have encountered who is hyping a future AI nirvana has something to sell. So let’s treat it like the claims of any salesman who sells miracle cures.
Remediating climate change will involve unimaginably labor-intensive tasks, like relocating every coastal city in the world kilometers inland, building high-speed rail links to replace aviation links, caring for hundreds of millions of traumatized, displaced people, and treating runaway zoonotic and insect-borne pandemics.
These tasks will absorb more than 100% of any labor freed up by automation
Putting aside whether we can “remediate climate change” (given where we are, IMHO we can now at best hope to adapt to it somehow), he’s absolutely correct that whatever we attempt to do is going to be immensely labor-intensive. And because it’s all new and hasn’t been done before, it cannot really be automated, as automation is fundamentally about having machines repeat the same thing over and over again. (If you agree with him, as I do, that a general AI is a looong time away, it certainly will not arrive in time to solve this crisis for us; for that, it would have needed to arrive 30-50 years ago.)
… the locus of the problem with technological unemployment: it’s not a technological problem at all, it’s an economic one.
Exactly. If technology suddenly took the jobs of gazillions of people, it’s not like there is nothing that the world needs to get done any more. Look around you: there are tons of things that should get done, from sweeping the street you live on more often to spending quality time with foster children or teaching people online the basics of science so they won’t fall for as many hoaxes.
Our current economic system has always had this very baffling feature of sending people home to watch TV when the economy tanks, instead of making us all work much harder to get the economy back out of the hole it is in! It’s the opposite of what should happen!
And as Cory says, the reason this happens is that private sector employment is correlated with economic success, not with the number and size of the problems to be solved. This is particularly important because many believe, myself included, that any realistic attempt to deal with the crisis will have to accept some form of economic de-growth, aka shrinkage, even as it creates a lot of work.
when the pandemic crisis is over, 30% of the world will either be unemployed or working for governments.
Very possibly so. But unlike what Cory implies, there will also be a significant (30%? Similar to the unemployment rate?) downturn in income. That’s because if 30% of people don’t work, or work in jobs for which no business model exists, they don’t produce things that others will pay for, and so we are all commensurately poorer. Money printing may make this effect less visible in prices, but as he says, money printing does not produce the things we need or want.
Here’s the full piece published by Locus Magazine. Unfortunately it also contains a bunch of mistakes, such as about the consequences of money printing. I have chosen to ignore those because the good points are really worthwhile and shouldn’t be obscured.
We need best practice templates for tech governance (just like we have a library of open-source licenses)
Ever learn things while being an invited “expert” on a panel at some conference? It just happened to me, again, at today’s Transparency By Design Summit.
We were discussing how to collect consumer healthcare data responsibly, for COVID-19 and beyond, and the challenge of how to (justly) gain the trust of the people whose data we would like to collect. Because if they don’t trust us, they won’t let us collect the data, or will even poison what they give us. The core question is:
How do I know that you, the data collector/medical researcher/public health system, will indeed do what you promised? (About privacy, data retention, anonymization, sharing etc)
And the answer, as always, is “good governance”, followed by a bunch of hand waving: just what exactly does this mean? What is that thing called “good governance” of a system that includes a lot of technology and a lot of humans developing and operating that technology? Take a COVID-19 contact tracing app: there’s the code, and the release process, and the data sharing, and the employment agreements of the people who touch the code or the data (which hopefully oblige them not to do “bad things”), and the legal enforcement, and the audit trails, and what have you. It’s not simple, and it goes far beyond just “the code”.
First of all, we have few examples where good governance is actually practiced, so we are not used to it. Worse, we have nothing resembling agreement on what it actually means in detail; even my example enumeration above is woefully lacking in detail.
It occurs to me it’s a bit like open-source licensing of code was about 20+ years ago, with everybody having their own software license (or none at all), many of which were homegrown and not very professional. Fortunately, the open-source world has since coalesced around a fairly small number of primary open-source licenses (like GPL, AGPL, Apache, MIT and a few more), which are fairly well understood.
We need the same thing for technology governance: a set of governance templates that technology systems can adopt. They could, for example, include open-source licensing for their code component (but they don’t necessarily need to), but would need to go far beyond that, covering questions such as:
- What is the data retention period?
- What’s the process to make sure the data is deleted after the data retention period?
- How do we find out whether the process is or isn’t being followed?
… and many other related questions. If we had such a series of templates, innovation in governance would still be possible (just create another template), but we could collectively understand what governance looks like for a given system and, for example, fix governance problems one bug at a time. Something not possible at all today.
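Such a template could even be made machine-readable, so that tooling can compare apps against it. Below is a minimal, purely illustrative sketch in Python; all names and values here (the template name, the 21-day retention period, the audit arrangement) are hypothetical assumptions of mine, not a proposed standard:

```python
from dataclasses import dataclass

# Hypothetical sketch of a machine-readable governance template.
# Field names mirror the example questions above; a real template
# library would need community agreement on these.

@dataclass
class GovernanceTemplate:
    name: str
    data_retention_days: int   # What is the data retention period?
    deletion_process: str      # How is deletion after the retention period ensured?
    compliance_audit: str      # How do we find out whether the process is followed?

# An app could then declare its governance by instantiating a template:
covid_app_governance = GovernanceTemplate(
    name="public-health-baseline-v1",          # hypothetical template name
    data_retention_days=21,                    # hypothetical value
    deletion_process="automated purge job, logs published monthly",
    compliance_audit="annual independent third-party audit",
)

print(covid_app_governance.name)
```

Just as with open-source licenses, the value would come less from any one template than from a small, shared set of them that everybody understands.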
It would go a long way towards all of us regaining trust in technology: in public health systems pushing COVID apps just as much as in Facebook pushing the latest “trust us, we won’t spy on you” update.
Anybody working on anything like that? Would love to hear about it.
What do you need to know so you can confidently trust a piece of technology, such as an app supposedly helping fight COVID-19?
That question is at the heart of Project App Assay. It applies to all technology, but is particularly important for the COVID-19 apps, because many of them collect so much information about our health, our friends, our locations and activities around the clock.
Here is a proposal.
First: the key questions that need answering, I think, are:
- Is the app effective? If it is not effective at what it does, such as helping fight the virus, there is no point, and you should not trust it to help with your life or the lives of your fellow people. Specifically:
  - Does it do what it says it does, and is it good at it? E.g. if it says it tracks contacts via Bluetooth, does it do that and do it well (and nothing else)?
  - Does that help with the virus? E.g. if the app provides medical advice, it would be pointless if the advice it dispensed made no difference to your health or the health of the people around you.
- What are the downsides of me using the app? These range from the mundane, like whether it will drain my phone’s battery quickly, to the profound: e.g. will the people promoting the app use the collected personal data for purposes other than fighting the virus? Perhaps even use it against me, now or at some point in the future, e.g. by jacking up insurance rates or finding other members of my persecuted religious minority?
These are critical questions we all ask ourselves when faced with the decision to use or not use an app.
As we analyze COVID-19 apps at Project App Assay, we have observed that the authors of those apps make many claims about their apps answering these questions, but that’s all they are: claims by the creators of the app who obviously have a self-interest. Can those claims be trusted? Clearly, it would be nice if we had more to go on.
So I have come up with the following rating scheme. It looks like this:
[Rating matrix with four dimensions: Effectiveness, Technology, Operations, Governance]
Let me explain:
- Effectiveness: what do we know about whether the app is effective? This includes whether its advertised features work, and what we know about whether it indeed helps and pushes back the virus.
- Technology: what do we know about the technology, including algorithms, which data is collected, what protocols and cryptography does it use and the like?
- Operations: what do we know about how the deployed system is operated, e.g. how often are security reviews being performed, who has access to cryptographic secrets, or are systems administrators vetted?
- Governance: who makes decisions, and how are they made, about all aspects of the app and the data it generates? How is dissent handled on the governance team? (E.g. is there a whistleblower process?)
We then rate each dimension with the possible values of:
- Self-asserted, few details: the app creator provides no or few details on the subject; no third party has validated those claims.
- Self-asserted, comprehensive: the app creator provides comprehensive information on the subject; but no independent, credible third party has validated those claims.
- Comprehensively audited by an independent, credible third party: the claims have been validated by an independent, credible third party, and found to be largely correct with no major discrepancies.
- Follows best industry practices: independent, credible third-party validation confirms that the app follows best industry practices.
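To make the scheme concrete, here is a small illustrative sketch of how the four dimensions and four rating levels could be represented in code. The specific evaluation values and the “weakest dimension” summary rule are my own hypothetical choices for illustration, not part of the proposal:

```python
from enum import IntEnum

# Sketch of the rating levels; higher values mean more independent
# validation ("further to the right" in the matrix). Names are illustrative.
class Rating(IntEnum):
    SELF_ASSERTED_FEW_DETAILS = 1
    SELF_ASSERTED_COMPREHENSIVE = 2
    INDEPENDENTLY_AUDITED = 3
    BEST_INDUSTRY_PRACTICES = 4

# A hypothetical evaluation covering the four dimensions:
evaluation = {
    "effectiveness": Rating.BEST_INDUSTRY_PRACTICES,
    "technology": Rating.BEST_INDUSTRY_PRACTICES,
    "operations": Rating.INDEPENDENTLY_AUDITED,
    "governance": Rating.BEST_INDUSTRY_PRACTICES,
}

# One possible summary rule (an assumption of mine): the weakest
# dimension determines overall trust, since trust breaks at the weakest link.
overall = min(evaluation.values())
print(overall.name)
```

Whether a single summary value is even desirable, or whether the four dimensions should always be shown separately, is an open design question.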
As an example, the evaluation of a simple hypothetical app that only dispenses health advice, and that gained high marks, might look like this:
[Rating matrix showing high marks across Effectiveness, Technology, Operations, and Governance]
This would be the evaluation for the health advice app if, for example:

- the health advice was sourced from respectable medical sources (e.g. the CDC), with back links to the source, and had been reviewed for correctness by the CDC.
- it was developed in the open, such as open source, with a large, diverse, and functional developer community; such a community effectively performs the audit function itself and gravitates towards following best technology practices.
- operations for this app are minimal and transparent, so this is a non-issue.
- governance of the app was performed in the open, such as in public meetings or on public mailing lists.
By contrast, the evaluation for a similar app with low marks could look like this:
[Rating matrix showing low marks across Effectiveness, Technology, Operations, and Governance]
This would happen if, for example:

- the health advice had no discernible source, and no review had been performed by medical professionals.
- the app was provided as a “black box” of which nothing is known other than what the developers claim about it, and they have publicly said little.
- nothing is known about who is involved in operations or governance of the app, and what decisions are being made on an ongoing basis.
Of course, it is entirely possible that an app could receive low marks although it is effective and does not harm users in any way.
However, for a public health emergency like COVID-19, I can think of few good reasons why apps should keep their technology or governance secret. And as large-scale adoption by many users is required for most to be effective, I can think of few ways to better gain user trust than evaluations all to the right in this matrix.
I would love your feedback on Twitter.
It’s the week of the 30th (yes!) Internet Identity Workshop (IIW), and it has just started. IIW has been an unconference since the very first time, with the agenda driven by the participants and conversation replacing presentations to a large extent.
Because of the pandemic, IIW has gone virtual using a website called qiqochat.com, which integrates collaboration tools such as Zoom, Google Docs and chat into a virtual venue for unconferences.
There is an opening circle, and there are breakout rooms. I’m fascinated by the potential of unconferences in cyberspace … it shows that crises often cause leaps in innovation.
So far, so very good! There have been over 200 people on video in the opening circle, seamless transition into much smaller breakout rooms, and collaborative document editing.
The Atlantic says COVID-19 is Stretching the International Order to Its Breaking Point.
Foreign Policy magazine thinks The Coronavirus Is the Biggest Emerging Markets Crisis Ever.
The last time we had a pandemic, in 1918, the empires of Russia, Austria-Hungary and Germany ended (of course, there was also World War I, but it’s still an interesting coincidence).
The biggest pandemic in European history, the Black Death in the 14th century, is credited with ending feudalism.
We are thinking too small in terms of impact.