-
2022-08-15
Levels of information architecture
I’ve been reading up on what is apparently called information architecture: the “structural design of shared information environments”.
A quite fascinating discipline, and sorely needed as the amount of information we need to interact with on a daily basis keeps growing.
I kind of think of it as “the structure behind the design”. If design is what you see when looking at something, information architecture is the beams and struts and foundations that keep the whole thing standing and comprehensible.
Based on what I’ve read so far, however, the discipline can be a bit myopic in focusing just on “what’s inside the app”. That’s most important, obviously, but insufficient in the age of IoT – where some of the “app” is actually controllable and observable through physical items – and the expected coming wave of AR applications. Even here and now, many flows start with QR codes printed on walls or scanned from other people’s phones, and we miss something in the “design of shared information environments” if we don’t make those in scope.
So I propose this outermost framework to help us think about how to interact with shared information environments:
- Universe-level:
- Focuses on where on the planet a user could conceivably be, and how that changes how they interact with the shared information environment. For example, functionality may differ between regions, use different languages or examples, or not be available at all.
- Environment-level:
- Focuses on the space in which the user is currently located (like sitting on their living room couch), or that they can easily reach, such as a bookshelf in the same room. Here we can have a discussion about, say, whether the user will pick up their Apple remote, run the virtual remote app on their iOS device, or walk over to the TV to turn up the volume.
- Device-level:
- Once the user has decided which device to use (e.g. their mobile phone, their PC, their AR goggles, a button on the wall, etc.), this level focuses on what the user does at the top level of that device. On a mobile phone or PC, that would be operating-system-level features such as which app to run (not the content of the app – that’s the next level down), or home screen widgets. Here we can discuss how the user interacts with the shared information space given that they also do other things on their device: how to get back and forth, integrations, and so forth.
- App-level:
- The top-level structure inside an app: For example, an app might have 5 major tabs reflecting 5 different sets of features.
- Page-level:
- The structure of pages within an app: do they have commonalities (such as all of them having a title at the top, or a toolbox to the right), and how are they structured?
- Mode-level:
- Some apps have “modes” that change how the user interacts with what is shown on a page. Most notably: drawing apps, where the selected tool (like drawing a circle vs. erasing) determines different interaction styles.
I’m just writing this down for my own purposes, because I don’t want to forget it and want to refer to it when thinking about design problems. And perhaps it is useful for you, the reader, as well. If you think it can be improved, let me know!
-
2022-08-07
An autonomous reputation system
Context: We never built an open reputation system for the internet. This was a mistake, and that’s one of the reasons why we have so much spam and fake news.
But now, as governance takes an ever-more prominent role in technology, such as for the ever-growing list of decentralized projects e.g. DAOs, we need to figure out how to give more power to “better” actors within a given community or context, and disempower or keep out the detractors and direct opponents. All without putting a centralized authority in place.
Proposal: Here is a quite simple, but as I think rather powerful proposal. We use an on-line discussion group as an example, but this is a generic protocol that should be applicable to many other applications that can use reputation scores of some kind.
-
Let’s call the participants in the reputation system Actors. As this is a decentralized, non-hierarchical system without a central power, there is only one class of Actor. In the discussion group example, each person participating in the discussion group is an Actor.
An Actor is a person, or an account, or a bot, or anything really that has some ability to act, and that can be uniquely identified with an identifier of some kind within the system. No connection to the “real” world is necessary, and it could be as simple as a public key. There is no need for proving that each Actor is a distinct person, or that a person controls only one Actor. In our example, all discussion group user names identify Actors.
-
The reputation system manages two numbers for each Actor, called the Reputation Score S and the Rating Tokens Balance R. It does this in such a way that it is impossible for those numbers to be changed outside of this protocol.
For example, these numbers could be managed by a smart contract on a blockchain which cannot be modified except through the outlined protocol.
-
The Reputation Score S is the current reputation of some Actor A, with respect to some subject. In the example discussion group, S might express the quality of content that A is contributing to the group.
If there is more than one reputation subject we care about, there will be an instance of the reputation system for each subject, even if it covers the same Actors. In the discussion group example, the reputation of contributing good primary content might be different from reputation for resolving heated disputes, for example, and would be tracked in a separate instance of the reputation system.
-
The Reputation Score S of any Actor automatically decreases over time. This means that Actors have a lower reputation if they were rated highly in the past than if they were rated highly recently.
There’s a parameter in the system, let’s call it αS, which reflects S’s rate of decay, such as 1% per month.
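The decay step can be sketched as follows (a minimal illustration, not the author’s implementation; the function name and the monthly tick are my assumptions, using the 1% example for αS):

```python
# Illustrative sketch: apply one month of Reputation Score decay.
ALPHA_S = 0.01  # the 1%-per-month decay rate from the example

def decay_scores(scores):
    """Return Reputation Scores S after one decay period."""
    return {actor: s * (1 - ALPHA_S) for actor, s in scores.items()}

scores = {"alice": 100.0, "bob": 50.0}
scores = decay_scores(scores)  # alice ≈ 99.0, bob ≈ 49.5
```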
-
Actors rate each other, which means that they take actions, as a result of which the Reputation Score of another Actor changes. Actors cannot rate themselves.
It is out of scope for this proposal to discuss what specifically might cause an Actor to decide to rate another, and how. This tends to be specific to the community. For example, in a discussion group, ratings might often happen if somebody reads newly posted content and reacts to it; but it could also happen if somebody does not post new content because the community values community members who exercise restraint.
-
The Rating Tokens Balance R is the set of tokens an Actor A currently has at their disposal to rate other Actors. Each rating that A performs decreases their Rating Tokens Balance R, and increases the Reputation Score S of the rated Actor by the same amount.
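The core rating operation is a transfer: the rater’s R goes down, the ratee’s S goes up by the same amount. A minimal sketch (names are mine, not part of the proposal):

```python
def rate(tokens, scores, rater, ratee, amount):
    """Spend `amount` of the rater's Rating Tokens R to raise the ratee's S."""
    assert rater != ratee, "Actors cannot rate themselves"
    assert tokens[rater] >= amount, "not enough Rating Tokens"
    tokens[rater] -= amount   # R of the rater decreases...
    scores[ratee] += amount   # ...and S of the ratee increases by the same amount

tokens = {"alice": 10.0, "bob": 10.0}
scores = {"alice": 100.0, "bob": 50.0}
rate(tokens, scores, "alice", "bob", 3.0)
```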
-
Every Actor’s Rating Tokens Balance R gets replenished on a regular basis, such as monthly. The regular increase in R is proportional to the Actor’s current Reputation Score S.
In other words, Actors with high reputation have a high ability to rate other Actors. Actors with a low reputation, or zero reputation, have little or no ability to rate other Actors. This is a key security feature inhibiting the ability for bad actors to take over.
-
The Rating Token Balance R is capped to some maximum value Rmax, which is a percentage of the current reputation of the Actor.
This prevents passive accumulation of rating tokens that then could be unleashed all at once.
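Replenishment and the cap can be sketched together (an illustration under my own assumptions: a 25% cap fraction for Rmax, and the function and constant names are invented):

```python
R_MAX_FRACTION = 0.25  # assumed: R may not exceed 25% of the Actor's current S

def replenish(tokens, scores, total_new_tokens):
    """Distribute new Rating Tokens proportionally to S, then apply the cap Rmax."""
    total_s = sum(scores.values())
    for actor, s in scores.items():
        tokens[actor] = tokens.get(actor, 0.0) + total_new_tokens * s / total_s
        tokens[actor] = min(tokens[actor], R_MAX_FRACTION * s)  # cap at Rmax

tokens = {"alice": 20.0, "bob": 0.0}
scores = {"alice": 150.0, "bob": 50.0}
replenish(tokens, scores, 40.0)
# alice would reach 50.0 tokens but is capped at 0.25 * 150 = 37.5; bob gets 10.0
```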
-
The overall number of new Ratings Tokens that is injected into the system on a regular basis as replenishment is determined as a function of the desired average Reputation Score of Actors in the system. This enables Actors’ average Reputation Scores to be relatively constant over time, even as individual reputations increase and decrease, and Actors join and leave the system.
For example, suppose the desired average Reputation Score is 100 in a system with 1000 Actors, the monthly decay reduced the sum of all Reputation Scores by 1,000, 10 new Actors joined over the month, and 1,000 Rating Tokens were eliminated because of the cap. Then 1,000 (decay) + 10 × 100 (new Actors) + 1,000 (cap) = 3,000 new Rating Tokens would be distributed to all Actors, proportional to their then-current Reputation Scores.
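The arithmetic of that example can be written as a small helper (function and parameter names are mine, sketching one plausible reading of the budget rule):

```python
def replenishment_budget(desired_avg, decay_loss, new_actors, cap_eliminated):
    """Rating Tokens to inject so the average Reputation Score stays near desired_avg:
    replace what decay removed, fund the newcomers, and replace cap-eliminated tokens."""
    return decay_loss + new_actors * desired_avg + cap_eliminated

budget = replenishment_budget(desired_avg=100, decay_loss=1000,
                              new_actors=10, cap_eliminated=1000)
# budget == 3000, matching the worked example
```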
-
Optionally, the system may allow downvotes. In this case, the rater’s Rating Token Balance still decreases by the number of Rating Tokens spent, while the rated Actor’s Reputation also decreases. Downvotes may be more expensive than upvotes.
There appears to be a dispute among reputation experts on whether downvotes are a good idea or not. Some online services support them, some don’t – presumably for good reasons that depend on the nature of the community and the subject. Here, we can model this simply by introducing another coefficient between 0 and 1, which reflects how much the downvoted Actor’s reputation decreases given the number of Rating Tokens spent by the downvoting Actor. With a coefficient of 1, downvotes cost the same as upvotes; with a coefficient of 0, no amount of downvoting can actually reduce somebody’s score.
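A downvote with such a coefficient might look like this (an illustrative sketch; the 0.5 coefficient, the flooring at zero, and all names are my assumptions):

```python
DOWNVOTE_COEFF = 0.5  # assumed: 0..1, how strongly spent tokens reduce the target's S

def downvote(tokens, scores, rater, ratee, amount):
    """Spend Rating Tokens to *reduce* the ratee's Reputation Score."""
    assert rater != ratee, "Actors cannot rate themselves"
    assert tokens[rater] >= amount, "not enough Rating Tokens"
    tokens[rater] -= amount  # downvoting costs the full token amount
    scores[ratee] = max(0.0, scores[ratee] - DOWNVOTE_COEFF * amount)

tokens = {"alice": 10.0}
scores = {"alice": 100.0, "bob": 50.0}
downvote(tokens, scores, "alice", "bob", 4.0)  # bob loses 0.5 * 4 = 2 points
```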
-
To bootstrap the system, an initial set of Actors who share the same core values about the to-be-created reputation each receives a bootstrap Reputation Score. This gives them the ability to receive Rating Tokens with which they can rate each other and newly entering Actors.
Some observations:
-
Once set up, this system can run autonomously. No oversight is required, other than perhaps adjusting some of the numeric parameters before enough experience has been gained about what those parameters should be in real-world operation.
-
Bad Actors cannot take over the system until they have played by the rules long enough to have accumulated sufficiently high reputation scores. Note they can only acquire reputation by being good Actors in the eyes of already-good Actors. So in this respect this system favors the status quo and community consensus over facilitating revolution, which is probably desirable: we don’t want a reputation score for “verified truth” to be easily hijackable by “fake news”, for example.
-
Anybody creating many accounts, aka Actors, has only a very limited ability to increase the total reputation they control across all of their Actors.
-
This system appears to be generally applicable. We discussed the example of rating “good” contributions to a discussion group, but it appears this could also be applied to things such as “good governance”: Actors who consistently perform activities others believe are good for governance are rated higher, and their governance reputation score could then be used to give them more votes in governance votes (such as adjusting the free numeric parameters, or other governance activities of the community).
Known issues:
-
This system does not distinguish reputation on the desired value (like posting good content) from reputation in rating other Actors (e.g. the difference between driving a car well and being able to judge others’ driving ability, as needed for driving instructors; I can imagine that there are some bad drivers who are good at judging others’ driving abilities, and vice versa). This could probably be solved with two instances of the system that are suitably connected (details tbd).
-
There is no privacy in this system. (This may be a feature or a problem depending on where it is applied.) Everybody can see everybody else’s Reputation Score, and who rated them how.
-
If implemented on a typical blockchain, the financial incentives are backwards: it would cost to rate somebody (a modifying operation to the blockchain) but it would be free to obtain somebody’s score (a read-only operation, which is typically free). However, rating somebody does not create immediate benefit, while having access to ratings does. So a smart contract would have to be suitably wrapped to present the right incentive structure.
I would love your feedback.
This proposal probably should have a name. Because it can run autonomously, I’m going to call it Autorep. And this is version 0.5. I’ll create new versions when needed.
-
2022-07-27
Is this the end of social networking?
Scott Rosenberg, in a piece with the title “Sunset of the social network”, writes at Axios:
Mark last week as the end of the social networking era, which began with the rise of Friendster in 2003, shaped two decades of internet growth, and now closes with Facebook’s rollout of a sweeping TikTok-like redesign.
A sweeping statement. But I think he’s right:
Facebook is fundamentally an advertising machine, like other Meta products are. They aren’t really about “technologies that bring the world closer together”, as the Meta homepage has it. At least not primarily.
This advertising machine has been amazingly successful, leading to a recent quarterly revenue of over $50 per user in North America (source). And Meta certainly has driven this hard, otherwise it would not have been in the news for overstepping the consent of its users year after year, scandal after scandal.
But now a better advertising machine is in town: TikTok. This new advertising machine is powered not by friends and family, but by an addiction algorithm. This addiction algorithm figures out your points of least resistance, and pours one advertisement after another down your throat. And as soon as you have swallowed one more, you scroll a bit more, and by doing so you ask for more advertisements, because of the addiction. This addiction-based advertising machine is probably close to the theoretical maximum of how many advertisements one can pour down somebody’s throat. An amazing work of art; as an engineer, I have to admire it. (Of course that admiration quickly changes into some other emotion of the disgusting sort, if you have any kind of morals.)
So Facebook adjusts, and transitions into another addiction-based advertising machine. Which does not really surprise anybody I would think.
And because it was never about “bring[ing] the world closer together”, they drop that mission as if they never cared. (That’s because they didn’t. At least MarkZ didn’t, and he is the sole, unaccountable overlord of the Meta empire. A two-class stock structure gives you that.)
With the giant turning its attention elsewhere, where does this leave social networking? Because the need and the desire to “bring[…] the world closer together”, and to catch up with friends and family, are still there.
I think it leaves social networking, or whatever will replace it, in a much better place. What if, this time around, we build products whose primary focus is actually the stated mission? Share with friends and family and the world, to bring it together (not divide it)! Instead of something unrelated, like making lots of ad revenue. What a concept!
Imagine what social networking could be! The best days of social networking are still ahead. Now that the pretenders are leaving, we can actually start solving the problem. Social networking is dead; long live what will emerge from the ashes. It might not be called social networking, but it will be social networking, just better.
-
2022-07-26
A list of (supposed) web3 benefits
I’ve been collecting a list of the supposed benefits of web3, to understand how the term is used these days. Might as well post what I found:
- better, fairer internet
- wrest back power from a small number of centralized institutions
- participate on a level playing field
- control what data a platform receives
- all data (incl. identities) is self-sovereign and secure
- high-quality information flows
- creators benefit
- reduced inefficiencies
- fewer intermediaries
- transparency
- personalization
- better marketing
- capture value from virtual items
- no censorship (content, finance etc)
- democratized content creation
- crypto-verified information correctness
- privacy
- decentralization
- composability
- collaboration
- human-centered
- permissionless
Some of this is clearly aspirational, perhaps on the other side of likely. It’s also not exactly what I would say if asked. But it is nevertheless an interesting list.
-
2022-07-26
The shortest definition of Web3
- web1: read
- web2: read + write
- web3: read + write + own
Found here, but probably lots of other places, too.
-
2022-07-03
What is a DAO? A non-technical definition
Definitions of “DAO” (short for Decentralized Autonomous Organization) usually start with technology, specifically blockchain. But I think that actually misses much of what’s exciting about DAOs, a bit like if you were to explain why your smartphone is great by talking about semiconductor circuits. Let’s try to define DAO without starting with blockchain.
For me:
A DAO is…
- a distributed group
- with a common cause of consequence
- that governs itself,
- does not have a single point of failure,
- and that is digital-native.
Let’s unpack this:
-
A group: a DAO is a form of organization. It is usually a group of people, but it could also be a group of organizations, a group of other DAOs (yes!) or any combination.
-
This group is distributed: the group members are not all sitting around the same conference table, and may never be. The members of many DAOs have not met in person, and often never will. From the get-go, DAO members may come from around the globe. A common jurisdiction cannot be assumed, and as DAO membership changes over time, it may be that most members eventually come from a very different geography than where the DAO started.
-
With a common cause: DAOs are organized around a common cause, or mission, like “save the whales” or “invest in real-estate together”. Lots of different causes are possible, covering most areas of human interest, including “doing good”, “not for profit” or “for profit”.
-
This cause is of consequence to the members, and members are invested in the group. Because of that, members will not easily abandon the group. So we are not talking about informal pop-in-and-out groups where people may have a good time but don’t really care whether the group is successful, but about something where the success of the group is important to the members, who will work on making the group successful.
-
That governs itself: it’s not a group that is subservient to somebody or some other organization or some other ruleset. Instead, the members of the DAO together make the rules, including how to change the rules. They do not depend on anybody outside of the DAO for that (unless, of course, they decide to do that). While some DAOs might identify specific members with specific roles, a DAO is much closer to direct democracy than representative democracy (e.g. as in traditional organization where shareholders elect directors who then appoint officers who then run things).
-
That does not have a single point of failure, and is generally resilient: there should be no single point of failure in terms of people who are “essential” and cannot be replaced, nor in terms of tools (like specific websites). In a DAO context this is often described as “sufficient decentralization”.
-
And that is digital-native: a DAO usually starts on-line as a discussion group, and over time, as its cause, membership and governance become more defined, gradually turns into a DAO. At all stages members prefer digital tools and digital interactions over traditional tools and interactions. For example, instead of having an annual membership meeting at a certain place and time, they will meet online. Instead of filling out paper ballots, they will vote electronically, e.g. on a blockchain. (This is where having a blockchain is convenient, but there are certainly other technical ways voting could be performed.)
Sounds … very broad? It is! For me, that’s one of the exciting things about DAOs. They come with very little up-front structure, so the members can decide what and how they want to do things. And if they change their minds, they change their minds and can do that any time, collectively, democratically!
Of course, all this freedom means more work, because a lot of defaults fall away and need to be defined. Governance can fail in new and unexpected ways, because we don’t have hundreds of years of precedent for how, say, Delaware corporations work.
As an inventor and innovator, I’m perfectly fine with that. The things I tend to invent – in technology – are also new and fail in unexpected ways. Of course, there are many situations where that would be unacceptable: when operating a nuclear power plant, for example. So DAOs definitely aren’t for everyone and everything. But where existing structures of governance are found to be lacking, here is a new canvas for you!