[PP-main] My thoughts

Joakim Ziegler joakim at simplemente.net
Fri Mar 10 22:57:41 CET 2000


On Wed, Mar 08, 2000 at 10:50:23PM -0800, Raph Levien wrote:

> HTTP vs roll-your-own
> ---------------------

> I come down firmly on the side of HTTP here. I can easily believe that
> Flux has a number of real advantages. But HTTP is so universally
> supported by the servers that we're hoping to bring into the Peer
> Press network that we will certainly be implementing HTTP
> anyway. Having two protocols strikes me as needless complexity, when
> we know that HTTP will get the job done.

I support using HTTP, definitely. I'm not sure that having more than one
protocol available would increase complexity, though, given that we want to
decouple the storage system from the protocol (or so I assume).
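To make the decoupling concrete, here's a minimal sketch of what I mean by separating storage from protocol. The class and method names are invented for illustration, not part of any agreed Peer Press design:

```python
# Hypothetical sketch: protocol front ends (HTTP, Flux, ...) talk to
# storage only through a narrow interface, so backends are swappable.

class ArticleStore:
    """Storage backend interface; any protocol handler sees only this."""
    def put(self, article_id, data):
        raise NotImplementedError
    def get(self, article_id):
        raise NotImplementedError

class FlatFileStore(ArticleStore):
    """Trivial in-memory stand-in for a flat-file backend."""
    def __init__(self):
        self._files = {}
    def put(self, article_id, data):
        self._files[article_id] = data
    def get(self, article_id):
        return self._files[article_id]

def handle_http_get(store, article_id):
    """An HTTP handler that only depends on the ArticleStore interface."""
    return store.get(article_id)
```

Swapping in a Flux-backed store would then mean implementing the same two methods, with no change to any protocol handler.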

We want to use this as a testbed for Flux. I don't see any problem with that,
as long as we support HTTP as well. In the future, Flux is going to have some
really nifty features that will probably lend themselves extremely well to
this sort of thing (along the lines of your Athshe project), but at the
moment I agree totally that committing to Flux alone would be foolish. We'll
toy around with it in the background, and jump out and make you drool when
we're ready. :)

I suppose the way it'll work with HTTP is to send HTTP POST requests
specifying the data you want, or even to use uploads for larger chunks.
This'll probably work. In that case, I think the correct way to implement it
would be as an Apache module, in C. The storage backend could be flat files,
or we could possibly use Flux, which has some very nice, and very fast,
embedded database functionality. But I'm not going to push Flux for that
specific purpose; I'm sure I've done enough marketing for a while already.
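As a rough illustration of the POST idea, here's what such a fetch request might look like on the wire. The /peerpress/fetch endpoint and the newline-separated body format are made up for the example; nothing here is an agreed protocol:

```python
# Hypothetical sketch: ask a member site for a set of articles
# via a plain HTTP/1.0 POST. Endpoint and body format are invented.

def build_fetch_request(host, article_ids):
    """Build the raw bytes of a POST request listing wanted article IDs."""
    body = "\n".join(article_ids).encode("utf-8")
    headers = (
        "POST /peerpress/fetch HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        "Content-Type: text/plain; charset=utf-8\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    ).encode("ascii")
    return headers + body

req = build_fetch_request("news.example.net", ["story-42", "photo-7"])
```

An Apache module in C would sit on the other end of this, parsing the body and pulling the articles out of whatever storage backend we settle on.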


> XML weirdness
> -------------

[Snippage]

> There is one area of "advanced" XML usage I recommend we pay attention
> to, though, which is internationalization. This means UTF-8 encoding
> for Unicode, and language tagging (at least for non-English
> languages). I'm a fine one to speak, as Advogato currently bails on
> both of these issues. But I do feel that paying attention to them now
> will save much trouble later.

I totally agree. Exchanging data in UTF-8 from the very start will also help
ensure that systems don't fall over the day they actually need it.
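For anyone who hasn't been bitten by this yet, here's the failure mode in miniature. Pure-ASCII content round-trips under almost any encoding, so a system that silently assumes Latin-1 looks fine until the first non-English headline arrives:

```python
# Demonstrates why agreeing on UTF-8 up front matters.

headline = "São Paulo: nueva versión liberada"   # non-ASCII content

wire = headline.encode("utf-8")           # bytes as sent over HTTP
assert wire.decode("utf-8") == headline   # lossless round trip

# Decoding the same bytes as Latin-1 "succeeds" without raising an
# error, but silently corrupts the text -- the classic late-breaking bug:
mojibake = wire.decode("latin-1")
assert mojibake != headline
```

Tagging each item with its language (as Raph suggests) is cheap metadata to carry alongside this from day one.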


> Centralized vs distributed
> --------------------------

> I personally have a strong bias in favor of distributed systems
> without single points of vulnerability. But this is largely due to the
> fact that my PhD research topic has to do with a distributed PKI
> system.

> I can see advantages to both approaches. A centralized system is much
> easier to implement and deploy. However, I have some concerns. The
> centralized server will require high-quality administration, a lot of
> bandwidth, and probably some HA stuff. The HA stuff is difficult, and
> the rest is expensive.

> I think there is still a role for a central system, for example to
> maintain the list of all members of the network, their names and their
> locations. Managing a namespace (of names of services) centrally makes
> it a _lot_ easier to avoid collisions. However, I think the system
> should be designed so that if the central server is down, people can
> still read their news.

> It seems to me that this system gives us a global namespace without
> too much difficulty. The toplevel servers are globally and centrally
> maintained, while all content within a service is managed locally by
> that service.

This works for me. There's one thing I'd like to do, though, which would take
us beyond simple RDF/RSS syndication: have the central server download all
the content from the different feeds, so that we could in theory do push
later if we wanted, and also offer centralized searching of the entire
network, etc. Where the information is actually downloaded from when it's
time to fetch the article/photo/whatever isn't that interesting, in my
opinion. The main point is to have a central starting point for getting hold
of all of it.
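The centralized-search part could be as simple as flattening every member feed into one index. A toy sketch, with hard-coded stand-ins for what would really come from RDF/RSS fetches (the service names and items are invented):

```python
# Hypothetical sketch: central server aggregates all member feeds
# into one index and offers network-wide keyword search.

feeds = {
    "site-a": [
        {"id": "a1", "title": "Trust metrics in practice"},
    ],
    "site-b": [
        {"id": "b1", "title": "Flux storage benchmarks"},
        {"id": "b2", "title": "Practical trust graphs"},
    ],
}

def build_index(feeds):
    """Flatten every feed into one list, remembering the source service."""
    index = []
    for service, items in feeds.items():
        for item in items:
            index.append({"service": service, **item})
    return index

def search(index, keyword):
    """Centralized keyword search across all member feeds."""
    kw = keyword.lower()
    return [entry for entry in index if kw in entry["title"].lower()]

index = build_index(feeds)
hits = search(index, "trust")
```

Since each entry remembers its originating service, the actual article fetch can still happen directly from the member site, which matches the point above about the download location not mattering much.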


> Syndication of identity
> -----------------------

This scheme sounds sane.


> Crossposting
> ------------

This too. As I've mentioned before, I'm not that interested in discussion
syndication myself, but I believe a system for it should be in place, and
this one sounds workable for all parties.


> Trust metrics and moderation
> ----------------------------

> Personally, I believe Peer Press should not mandate the use of a
> specific trust metric or moderation scheme. Sites should be free to
> use what's most appropriate for them. The role of Peer Press should be
> to foster communication between sites that want to share trust
> metadata.

> As I see it, there are two basic forms of metadata worth sharing. The
> first is an Advogato-like certificate stating that user A trusts user
> B. The second is that a user rates a specific piece of content. Both
> of these need to be parameterized a bit, to specify a rating or level,
> and perhaps to narrow the scope, such as rating interest within a
> particular topic.

> This way, a server can choose to import edges in its trust graph and
> treat them effectively the same as locally generated edges, or not.
> Local autonomy, global coordination.

How do you feel about an inter-site trust metric, though? Site admins could
rate the quality and trustworthiness of each other's sites, and then have the
option of enabling autoposting without editorial control once the combined
trust and keyword-match factor rises above a certain level.
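To spell out what I mean by the combined factor, here's one possible rule: multiply the admin-assigned trust in the originating site by how well the story matches the receiving site's keywords, and autopost only above a threshold. The weights and the 0.5 cutoff are placeholders, not a proposal:

```python
# Hypothetical sketch of the inter-site autoposting rule: autopost a
# crossposted story without editorial review only when
# (site trust) * (keyword match) clears a threshold.

def keyword_match(story_keywords, site_keywords):
    """Fraction of the receiving site's keywords the story carries."""
    if not site_keywords:
        return 0.0
    return len(set(story_keywords) & set(site_keywords)) / len(site_keywords)

def should_autopost(site_trust, story_keywords, site_keywords,
                    threshold=0.5):
    """True when trust * keyword-match clears the threshold."""
    return site_trust * keyword_match(story_keywords, site_keywords) >= threshold

# A well-trusted site (0.9) with only half the keywords matched
# (0.9 * 0.5 = 0.45) still falls below a 0.5 threshold, so the story
# would go to the editorial queue instead.
queued = not should_autopost(0.9, ["linux", "gnome"], ["gnome", "kde"])
```

The nice property is that each site picks its own threshold, which fits the "local autonomy, global coordination" principle above.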


> The main challenge of this approach is that it requires client
> support. Having direct client support for the Peer Press protocols is
> probably something that makes sense - I (and others) believe it's
> possible to deliver a much more pleasant user experience that way,
> with latency hiding for the network protocols, more local metadata
> about which articles have been read, and so on. Such a client would
> also be able to manage participation in a number of sites more
> smoothly, handling the two-phase commit of the crossposting protocol
> automatically and also simply providing a consistent view to all the
> stories.

> Less sophisticated users might be interested in web servers acting as
> generic portals to the peer press network.

We should probably look at plugging this into existing client systems, like
newsreaders.

On the other hand, you shouldn't assume that the web and native clients are
the only target media. Others could be WAP/WML and other wireless delivery
methods, wire service protocols, and so on.

-- 
Joakim Ziegler - simplemente r&d director - joakim at simplemente.net
 FIX sysop - free software coder - FIDEL & Conglomerate developer
      http://www.avmaria.com/ - http://www.simplemente.net/
