-
2020-06-06
Tim O’Reilly’s post-COVID-19 futures
-
2020-05-07
Trust through Transparency for Apps
What do you need to know so you can confidently trust a piece of technology, such as an app supposedly helping fight COVID-19?
That question is at the heart of Project App Assay. It applies to all technology, but is particularly important for the COVID-19 apps, because many of them collect so much information about our health, our friends, our locations and activities around the clock.
Here is a proposal.
First: the key questions that need answering, I think, are:
-
Is the app effective? If it is not effective at what it does, such as helping fight the virus, there is no point, and you should not trust it with your life or the lives of those around you. Specifically:
-
Does it do what it says it does and is it good at it? E.g. if it says it tracks contacts via Bluetooth, does it do that and do it well (and nothing else)?
-
Does that help with the virus? E.g. if the app provides medical advice, it would be pointless if the advice it dispensed made no difference to your health or the health of the people around you.
-
-
What are the downsides of me using the app? These range from the mundane, like will it drain my phone’s battery quickly, to the profound: e.g. will the people promoting the app use the collected personal data for purposes other than fighting the virus? Perhaps even use it against me now or at some point in the future, e.g. by jacking up insurance rates or finding other members of my persecuted religious minority?
These are critical questions we all ask ourselves when faced with the decision to use or not use an app.
As we analyze COVID-19 apps at Project App Assay, we have observed that the authors of those apps make many claims about their apps answering these questions, but that’s all they are: claims by the creators of the app who obviously have a self-interest. Can those claims be trusted? Clearly, it would be nice if we had more to go on.
So I have come up with the following rating scheme. It looks like this:

| | Self-asserted, few details | Self-asserted, comprehensive | Comprehensively audited | Demonstrably uses best practices |
| --- | --- | --- | --- | --- |
| Effectiveness | | | | |
| Technology | | | | |
| Operations | | | | |
| Governance | | | | |

Let me explain:
- Effectiveness: what do we know about whether the app is effective? This includes whether its advertised features work, and what we know about whether it indeed helps and pushes back the virus.
- Technology: what do we know about the technology, including algorithms, which data is collected, what protocols and cryptography does it use and the like?
- Operations: what do we know about how the deployed system is operated, e.g. how often are security reviews being performed, who has access to cryptographic secrets, or are systems administrators vetted?
- Governance: who makes decisions, and how are they made, about all aspects of the app and the data it generates? How is dissent handled on the governance team? (E.g. is there a whistleblower process?)
We then rate each dimension with the possible values of:
- Self-asserted, few details: the app creator provides no or few details on the subject; no third party has validated those claims.
- Self-asserted, comprehensive: the app creator provides comprehensive information on the subject; but no independent, credible third party has validated those claims.
- Comprehensively audited by an independent, credible third party: the claims have been validated by an independent, credible third party, and found to be largely correct with no major discrepancies.
- Demonstrably uses best practices: the third-party validation confirms that the app follows best industry practices.
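The scheme above is just four dimensions rated on an ordered four-level scale, where the weakest dimension arguably dominates overall trust. A minimal sketch in Python (the class and field names are my own illustration, not part of Project App Assay) could look like this:

```python
from dataclasses import dataclass
from enum import IntEnum


class Rating(IntEnum):
    """The four levels of the scale, ordered from least to most trustworthy."""
    SELF_ASSERTED_FEW_DETAILS = 1
    SELF_ASSERTED_COMPREHENSIVE = 2
    COMPREHENSIVELY_AUDITED = 3
    DEMONSTRABLY_BEST_PRACTICES = 4


@dataclass
class AppAssay:
    """One rating per dimension of the scheme."""
    effectiveness: Rating
    technology: Rating
    operations: Rating
    governance: Rating

    def overall(self) -> Rating:
        # A conservative summary: an app is only as trustworthy
        # as its weakest dimension.
        return min(self.effectiveness, self.technology,
                   self.operations, self.governance)


# Illustrative values for a hypothetical app whose effectiveness has
# been independently audited and whose other dimensions demonstrably
# follow best practices:
good_app = AppAssay(
    effectiveness=Rating.COMPREHENSIVELY_AUDITED,
    technology=Rating.DEMONSTRABLY_BEST_PRACTICES,
    operations=Rating.DEMONSTRABLY_BEST_PRACTICES,
    governance=Rating.DEMONSTRABLY_BEST_PRACTICES,
)
print(good_app.overall().name)  # COMPREHENSIVELY_AUDITED
```

Because the levels are ordered, `IntEnum` lets us compare and take the minimum directly; whether "weakest dimension wins" is the right aggregation is of course a judgment call.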
As an example, the evaluation of a simple hypothetical app that only dispenses health advice and gained high marks might look like this:

| | Self-asserted, few details | Self-asserted, comprehensive | Comprehensively audited | Demonstrably uses best practices |
| --- | --- | --- | --- | --- |
| Effectiveness | | | | ✓ |
| Technology | | | | ✓ |
| Operations | | | | ✓ |
| Governance | | | | ✓ |

This would be the evaluation for the health advice app, if, for example:
-
the health advice was sourced from respectable medical sources (e.g. CDC) with back links to the source, and had been reviewed for correctness by the CDC.
-
it was developed in the open, e.g. as open source, with a large, diverse, and functional developer community. Such a community effectively performs the audit function itself, and gravitates toward following best technology practices.
-
for this app, operations are minimal and transparent, so this is a non-issue.
-
governance of the app was performed in the open, such as in public meetings or on public mailing lists.
Conversely, the evaluation for a similar app with low marks could look like this:

| | Self-asserted, few details | Self-asserted, comprehensive | Comprehensively audited | Demonstrably uses best practices |
| --- | --- | --- | --- | --- |
| Effectiveness | ✓ | | | |
| Technology | ✓ | | | |
| Operations | ✓ | | | |
| Governance | ✓ | | | |

This would happen if, for example:
-
the health advice had no discernible source, and no review had been performed by medical professionals.
-
the app was provided as a “black box” of which nothing is known other than what the developers claim about it, and they have publicly said little.
-
there is no knowledge about who is involved in operations or governance of the app, and what decisions are being made on an ongoing basis.
Of course, it is entirely possible that an app could receive low marks although it is effective and does not harm users in any way.
However, for a public health emergency like COVID-19, I can think of few good reasons why apps should keep their technology or governance secret. And as large-scale adoption is required for most of these apps to be effective, I can think of few better ways to gain user trust than evaluations all the way to the right in this matrix.
I would love your feedback on Twitter.
-
-
2020-04-28
Virtual Unconference
It’s the week of the 30th (yes!) Internet Identity Workshop (IIW), and it just started. IIW has been an unconference since the very first time, with the agenda driven by the participants, and conversation replacing presentations to a large extent.
Because of the pandemic, IIW has gone virtual using a website called qiqochat.com, which integrates collaboration tools such as Zoom, Google Docs and chat into a virtual venue for unconferences.
There is an opening circle, and there are breakout rooms. I’m fascinated by the potential of unconferences in cyberspace … it shows that crises often cause leaps in innovation.
So far, so very good! There have been over 200 people on video in the opening circle, seamless transition into much smaller breakout rooms, and collaborative document editing.
-
2020-04-27
We are likely underestimating the impact of COVID-19
The Atlantic says COVID-19 is Stretching the International Order to Its Breaking Point.
Reuters writes that the IMF (International Monetary Fund) sees a coronavirus-induced global downturn “way worse” than the financial crisis.
Foreign Policy magazine thinks The Coronavirus Is the Biggest Emerging Markets Crisis Ever.
The last time we had a pandemic, in 1918, the empires of Russia, Austria-Hungary and Germany ended (of course, there was also World War I, but it’s still an interesting coincidence).
The biggest pandemic in European history, the Black Death in the 14th century, is credited with ending feudalism.
We are thinking too small in terms of impact.
-
2020-04-08
How the Pandemic Will End
Slightly different take from mine.
-
2020-04-08
We’re not going back to normal
Opinion piece in MIT Technology Review. I mostly agree, and would like to go beyond his point:
Some new things we are being forced to do and we’ll stop doing them as soon as we can. But we will also have had the opportunity to experience new ways of working and living that we did not have before, and some of them we’ll like and keep.
Many of which are excellent for the climate – commuting less, for example.