[PP-main] Technical/implementation matters
joakim at simplemente.net
Sun Mar 5 22:25:10 CET 2000
On Sun, Mar 05, 2000 at 09:05:55PM +0000, Andrew Cooke wrote:
> At 10:31 AM 3/3/00 -0600, you wrote:
>> On Fri, Mar 03, 2000 at 04:05:01PM +0000, Andrew Cooke wrote:
>>> [...] No offence, but I would be very
>>> wary of a protocol designed by someone else that hadn't had the same level
>>> of public scrutiny.
>> I agree that SSL is rather commonly used. However, the system we're making
>> works quite well, and is based on implementations that have been publicly
>> scrutinized for quite some time, and are also patent free, namely Twofish and
>> ElGamal, with the majority of code taken directly from GPG.
>> Take a look at our Flux library, http://projects.simplemente.net/flux/ if
>> you're interested.
> I was interested, but the links for documentation on Cryptography and
> Entropy didn't exist. You might be using recognised ciphers, but if you
> haven't yet documented the protocol I don't think that you can argue that
> it has been under much public scrutiny. How do you deal with
> man-in-the-middle attacks using address spoofing, for example?
The only way it can be done: using host signatures.
There's no "protocol" in Flux. Flux is designed to let you keep cryptography,
compression, and the like out of the protocol design.
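To illustrate the host-signature idea against address spoofing, here's a purely hypothetical sketch of key pinning: the client stores a fingerprint of each host's public key out of band and refuses to talk to a host whose presented key doesn't match. (The actual Flux code signs with ElGamal via GPG-derived code; none of the names below are from Flux.)

```python
# Illustrative key-pinning sketch, not the actual Flux implementation.
import hashlib

# Fingerprints of known host keys, distributed out of band.
PINNED = {"server.example.net":
          hashlib.sha1(b"server-public-key-bytes").hexdigest()}

def host_key_ok(host, presented_key):
    # A spoofed address can't help an attacker who lacks the pinned key.
    fp = hashlib.sha1(presented_key).hexdigest()
    return PINNED.get(host) == fp

print(host_key_ok("server.example.net", b"server-public-key-bytes"))  # True
print(host_key_ok("server.example.net", b"attacker-key"))             # False
```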
Now, you might argue "why not use HTTP, then, that's a protocol", but it's
not really one. That is, you'd want to use HTTP as a transport, much like Flux
would be a transport. HTTP as a protocol has no provisions for the things
we'd like to do on a protocol level; you'd have to graft them on somewhere, in
rather arbitrary ways.
For instance, let's say you want to let the member site upload the filter
settings they use. Ok, in Flux, you'd define that on a protocol level,
perhaps by wrapping the entire thing with a root node that has a certain
content, like the string "filter_parameters" or whatever.
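A rough sketch of what wrapping a payload in an operation-naming root node could look like (the element and attribute names here are made up for illustration, not the actual Flux schema):

```python
# Hypothetical sketch: wrap application data in a root node whose name
# ("filter_parameters") identifies the operation at the protocol level.
from xml.dom.minidom import Document

def build_filter_upload(filters):
    doc = Document()
    root = doc.createElement("filter_parameters")  # operation named at root
    doc.appendChild(root)
    for name, value in filters.items():
        node = doc.createElement("filter")
        node.setAttribute("name", name)
        node.appendChild(doc.createTextNode(value))
        root.appendChild(node)
    return doc.toxml()

print(build_filter_upload({"language": "en", "max_age_days": "7"}))
```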
Now, what would you do in HTTP? Well, you'd have to upload a file in a
certain format, and use the file format as the protocol, basically. The amount
of work and complexity is more or less the same. Now, to download stuff, for
Flux, you do much the same: send a request. But sending a request using
HTTP, you need to send an HTTP request, obviously, because custom requests
(in the form of uploading a small file that contains the request, for
instance) don't work, since you have no state. That means you need to define
the request as URL-encoded data in a GET or POST, which is a very flat
representation.
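To see the flatness concretely, here's what the same hypothetical request looks like forced into URL encoding: nesting has to be faked with naming conventions, and parsing it back yields flat lists of strings, not a tree.

```python
# Forcing a structured request through HTTP GET/POST: everything becomes
# flat key=value pairs (the dotted keys are an ad-hoc nesting convention).
from urllib.parse import urlencode, parse_qs

flat = urlencode({"op": "get_articles",
                  "filter.language": "en",
                  "filter.max_age_days": "7"})
print(flat)  # op=get_articles&filter.language=en&filter.max_age_days=7

# Decoding recovers only flat string lists, not the original structure:
print(parse_qs(flat))
```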
There's another major advantage to using Flux for transferring XML data: the
XML data gets parsed and validated before transfer. In other words, you can
set up the system to refuse malformed and/or invalid XML before it's even
transferred to the server, which will cut down on the error rate and
bandwidth use, and make it a lot easier for the site that's pushing the
XML to see what it's doing wrong.
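A minimal sketch of that client-side gatekeeping, using a plain well-formedness check (validation against a DTD or schema would be a further step on top of this):

```python
# Reject malformed XML on the client, before any bytes reach the server.
from xml.parsers import expat

def is_well_formed(data):
    parser = expat.ParserCreate()
    try:
        parser.Parse(data, True)  # True = this is the final chunk
        return True
    except expat.ExpatError:
        return False

print(is_well_formed("<filter_parameters><f/></filter_parameters>"))  # True
print(is_well_formed("<filter_parameters><f></filter_parameters>"))   # False
```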
On a different note, there's been some discussion back and forth about the
single centralized server I advocated. If this is really seen as a
problem, why not use a small set of servers, with a single level of
indirection: a client site pushes its data to the server it chooses. Any
server receiving data from a client site pushes that data to all the other
servers, and client sites get the data from the server of their choice. This
doesn't scale to an enormous number of servers, but it's very fault-tolerant. I
don't think we're ever going to need a large number of servers, given that
the system isn't aimed at the general public connecting directly. If it ever
needs scaling, we can reimplement the system without changing how it looks
from the outside to the client sites. Right now, though, this is an
implementation that's really easy to do, and it will scale to at least 10
servers.
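The one-level-of-indirection scheme can be sketched as a toy in-memory model (all the names here are illustrative): a client pushes to whichever server it likes, that server forwards to every peer exactly once, and peers don't re-forward, so the data ends up everywhere without loops.

```python
# Toy model of the proposed replication: small fixed set of servers,
# one forwarding hop, no re-forwarding between peers.
class Server:
    def __init__(self, name):
        self.name = name
        self.peers = []   # every other server in the small, fixed set
        self.store = {}   # replicated data

    def receive_from_client(self, key, value):
        self.store[key] = value
        for peer in self.peers:          # forward exactly once
            peer.receive_from_peer(key, value)

    def receive_from_peer(self, key, value):
        self.store[key] = value          # peers do not re-forward

a, b, c = Server("a"), Server("b"), Server("c")
a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]

# The client picks any server; the data still reaches all of them.
a.receive_from_client("site1/articles", "<articles/>")
print(all(s.store.get("site1/articles") == "<articles/>"
          for s in (a, b, c)))  # True
```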
Joakim Ziegler - simplemente r&d director - joakim at simplemente.net
FIX sysop - free software coder - FIDEL & Conglomerate developer
http://www.avmaria.com/ - http://www.simplemente.net/