[PP-main] My thoughts
raph at acm.org
Thu Mar 9 07:50:23 CET 2000
For what it's worth, here are my thoughts on peerpress. I'm afraid
this is going to be a long post. I'm really busy now, so have to
schedule these things in bursts.
HTTP vs roll-your-own
I come down firmly on the side of HTTP here. I can easily believe that
Flux has a number of real advantages. But HTTP is so universally
supported by the servers that we're hoping to bring into the Peer
Press network that we will certainly be implementing HTTP
anyway. Having two protocols strikes me as needless complexity, when
we know that HTTP will get the job done.
Believe me, I've been in the position of having cool new technology
that I _knew_ was better, and being frustrated when people choose the
"tried and true" over it. But I've also been in the position of
depending on cutting-edge technology from someone else, and of being
the one that someone else is depending on. It's not pleasant.
The use of XML seems not to be very controversial. I suggest, though,
that we stick to vanilla XML without any of the cool add-ons that the
W3 delights in cooking up (XPath, XLink, XPointer, XML namespaces,
RDF, schemas, DOM, XSL, etc). I've looked at all these and feel that
the benefits derived don't quite justify the added complexity.
There is one area of "advanced" XML usage I recommend we pay attention
to, though, which is internationalization. This means UTF-8 encoding
for Unicode, and language tagging (at least for non-English
languages). I'm a fine one to speak, as Advogato currently bails on
both of these issues. But I do feel that paying attention to them now
will save much trouble later.
Centralized vs distributed
I personally have a strong bias in favor of distributed systems
without single points of vulnerability. But this is largely due to the
fact that my PhD research topic has to do with a distributed PKI.
I can see advantages to both approaches. A centralized system is much
easier to implement and deploy. However, I have some concerns. The
centralized server will require high-quality administration, a lot of
bandwidth, and probably some HA stuff. The HA stuff is difficult, and
the rest is expensive.
I think there is still a role for a central system, for example to
maintain the list of all members of the network, their names and their
locations. Managing a namespace (of names of services) centrally makes
it a _lot_ easier to avoid collisions. However, I think the system
should be designed so that if the central server is down, people can
still read their news.
It seems to me that this system gives us a global namespace without
too much difficulty. The toplevel servers are globally and centrally
maintained, while all content within a service is managed locally by
that service's own server.
This is not a recommendation for syntax, but I can easily imagine a
name in this namespace resembling 'Advogato: /article/28' or
'Cluedot: /user/raphlinus'.
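A name like that splits cleanly into a service part, resolved through the centrally maintained list, and a path that is meaningful only to that service's server. A minimal sketch of such a split (the function name and error handling are mine, not proposed syntax):

```python
def parse_name(name):
    """Split a global name like 'Advogato: /article/28' into
    (service, local_path).  The service part would be looked up
    in the centrally maintained server list; the path is opaque
    to everyone except that service's own server."""
    service, sep, path = name.partition(":")
    if not sep or not path.strip().startswith("/"):
        raise ValueError("not a global name: %r" % name)
    return service.strip(), path.strip()

print(parse_name("Advogato: /article/28"))     # ('Advogato', '/article/28')
print(parse_name("Cluedot: /user/raphlinus"))
```

Note that collisions are only possible in the service part, which is exactly the part managed centrally.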
Syndication of identity
Here I propose a system for syndicating identity. I believe this
system is fairly simple, robust, and scalable.
Basically, I propose that people create multiple accounts on different
systems, but have the option of mutually linking them. Thus, on my
Advogato account, I say 'Cluedot: /user/raphlinus' is also me, and on
Cluedot I say 'Advogato: /user/raph' is also me. Because the usernames
are relative to the server, it's no problem for Cluedot: /user/raph to
be a different person than me.
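The key property is that a link only counts when it is mutual, which is what keeps impersonation out. A sketch, with an in-memory table standing in for what each server would actually publish:

```python
# Sketch: the 'claims' table stands in for the "also me" lists
# each server would publish for its users.
claims = {
    "Advogato: /user/raph":     {"Cluedot: /user/raphlinus"},
    "Cluedot: /user/raphlinus": {"Advogato: /user/raph"},
    "Cluedot: /user/raph":      set(),   # a different person, no links
}

def same_identity(a, b):
    """An identity link counts only when each account names the other."""
    return b in claims.get(a, set()) and a in claims.get(b, set())

assert same_identity("Advogato: /user/raph", "Cluedot: /user/raphlinus")
assert not same_identity("Advogato: /user/raph", "Cluedot: /user/raph")
```

A one-sided claim proves nothing, since anyone can claim to be anyone; requiring the back-link means both servers' account holders had to cooperate.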
Once identities are syndicated, a bunch of new possibilities open
up. For one, if I post somewhere else, then Advogato's trust metric
can "see" that I am trustworthy. Also, I might choose to set things up
so that my local server pulls all the content I'm interested in from
the other sites I participate in.
There are two existing, simple models for crossposting. In one, each
server has its own discussion forum, and there is no write access from
outside servers. Currently, read access is implemented in the form of
links, but I see no reason why the actual content can't move too. This
would allow better presentation.
The other model is Usenet, where all comments show up in all
servers. I think we can reject this out of hand, as it basically does
away with any sense of local community.
Here's what I propose as a compromise. When beginning a new thread,
the poster gets to choose a subset of servers, subject to the
constraint that he has write permission on all of them. Further posts
to the thread are allowed only to people who also have write
permission on all of them.
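The write-access rule is just an intersection test. A sketch, with a made-up permission table (the names here are illustrative, not real access lists):

```python
# Sketch: who may start or join a thread crossposted to a server
# subset.  The table is illustrative, not a real access list.
write_access = {
    "Advogato": {"raph", "miguel"},
    "Cluedot":  {"raph", "alan"},
}

def may_post(user, servers):
    """A poster needs write permission on every server in the subset."""
    return all(user in write_access.get(s, set()) for s in servers)

thread_servers = ["Advogato", "Cluedot"]
assert may_post("raph", thread_servers)      # bridges both communities
assert not may_post("alan", thread_servers)  # Cluedot only
```

So the set of eligible posters for a thread is exactly the intersection of the member lists of the chosen servers, which is what biases crossposting toward genuine community bridgers.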
To me, this encourages crossposting to be limited to people who really
do bridge multiple communities, while fostering local
discussion. Further, this method avoids the fragmentation you see when
different people apply different filters to read comments within a
thread.
The identity syndication above is obviously important to make this
work.
How to determine write access? I suggest a two-phase commit. Assume
there is a single system acting on behalf of the poster. This could
either be the poster's system itself, assuming client support, or
could be a proxy server.
The client system first issues "pre-post" requests to the servers in
the subset. Each of these servers issues confirmation that the
pre-post was accepted. At this point, the client system issues "post"
requests to all servers. Each of the servers then checks for
confirmations of the pre-post from all the other servers (if
necessary, by opening a connection), then posts the story.
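The handshake above can be sketched with in-memory objects standing in for the real HTTP servers; the method names "pre_post" and "post" mirror the request names, but nothing here is proposed wire syntax:

```python
# Sketch of the pre-post / post handshake.  Each Server object
# stands in for a real HTTP endpoint.
class Server:
    def __init__(self, name):
        self.name = name
        self.prepared = set()   # pre-posts this server has accepted
        self.posted = []

    def pre_post(self, post_id):
        self.prepared.add(post_id)
        return True             # confirmation back to the client

    def post(self, post_id, story, peers):
        # Before committing, verify every server in the subset
        # (including ourselves) confirmed the pre-post.
        if all(post_id in p.prepared for p in peers):
            self.posted.append(story)
            return True
        return False

def crosspost(story, servers):
    post_id = id(story)   # stand-in for a real unique identifier
    # Phase 1: gather pre-post confirmations from the whole subset.
    if not all(s.pre_post(post_id) for s in servers):
        return False
    # Phase 2: ask each server to commit; each re-checks its peers.
    return all(s.post(post_id, story, servers) for s in servers)

a, c = Server("Advogato"), Server("Cluedot")
ok = crosspost("my first crosspost", [a, c])
assert ok and a.posted == c.posted == ["my first crosspost"]
```

The point of the second phase's re-check is that no server commits a story unless it can see that every other server in the subset agreed to accept it, so a thread never appears on only part of its chosen subset.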
Trust metrics and moderation
Personally, I believe Peer Press should not mandate the use of a
specific trust metric or moderation scheme. Sites should be free to
use what's most appropriate for them. The role of Peer Press should be
to foster communication between sites that want to share trust
metadata.
As I see it, there are two basic forms of metadata worth sharing. The
first is an Advogato-like certificate stating that user A trusts user
B. The second is that a user rates a specific piece of content. Both
of these need to be parameterized a bit, to specify a rating or level,
and perhaps to narrow the scope, such as rating interest within a
particular subject area.
This way, a server can choose to import edges into its trust graph and
treat them effectively the same as locally generated edges, or not.
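To illustrate, here is a sketch of the two metadata forms as plain records, with the import decision left as a per-site policy switch. The record layout and level names are my own placeholders, not a proposed format:

```python
# Sketch: the two metadata forms as plain tuples.
#   ("cert", issuer, subject, level)   -- user A trusts user B
#   ("rating", rater, content, score)  -- user rates a piece of content
local_edges = {
    ("Advogato: /user/raph", "Advogato: /user/miguel", "Master"),
}

imported = [
    ("cert", "Cluedot: /user/alan", "Cluedot: /user/raphlinus", "Journeyer"),
    ("rating", "Cluedot: /user/alan", "Advogato: /article/28", 4),
]

def import_edges(graph, records, accept_remote=True):
    """Merge remote cert edges into the local trust graph -- or not.
    Whether remote edges count like local ones is each site's own
    policy decision."""
    if accept_remote:
        for kind, a, b, level in records:
            if kind == "cert":
                graph.add((a, b, level))
    return graph

graph = import_edges(set(local_edges), imported)
```

Whatever trust metric a site runs locally then simply operates on the merged (or unmerged) graph; nothing in the shared format dictates the metric itself.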
Local autonomy, global coordination.
My personal feeling is that the system should initially be deployed
without crypto, but that it can (and should) be added later.
A particularly interesting application of crypto is to authenticate
identities, for example when entering certificates. This also makes
importation of certificates much more robust, as the validity of the
cert would then be a public key operation. Frankly, I'd use GPG for
this.
The main challenge of this approach is that it requires client
support. Having direct client support for the Peer Press protocols is
probably something that makes sense - I (and others) believe it's
possible to deliver a much more pleasant user experience that way,
with latency hiding for the network protocols, more local metadata
about which articles have been read, and so on. Such a client would
also be able to manage participation in a number of sites more
smoothly, handling the two-phase commit of the crossposting protocol
automatically and also simply providing a consistent view to all the
sites one participates in.
Less sophisticated users might be interested in web servers acting as
generic portals to the peer press network.