[PP-main] Technical/implementation matters
andrew at andrewcooke.free-online.co.uk
Fri Mar 3 17:05:01 CET 2000
At 08:38 AM 3/3/00 -0600, you wrote:
>On Thu, Mar 02, 2000 at 06:43:57PM -0500, Rusty Foster wrote:
>> I think http is the way to go for client<->server communication. It's
>> cheap, easy, and everyone running a website already has it implemented.
>HTTP could work, but it's really sub-optimal for this sort of thing. If I had
>to choose an existing protocol, I'd rather consider NNTP. But I don't see
>what's wrong with brewing our own. Of course, I have an ulterior motive:
>We're doing some software here that's specifically made for transferring
>pre-parsed tree data (for instance XML) and knows how to take XML as input,
>and output it on the other end, diff trees, and things like this. It's also
>handy because it's extremely easy to use (it's a C library, with very
>high-level functions). It also has crypted streams built in, without the need
>for SSL libraries or anything of the sort.
>I'm not a big fan of the trend lately to use HTTP for just about everything.
>It's not what HTTP was made for, and it's showing.
>My idea is to use the comm tool we're making at the bottom level, and then
>supply some tools that use it, which let you do stuff like poll the server
>for new messages, etc., command line. That's the most UNIXy and flexible way
>of doing it, I think. Opinions?
This is intended only as a constructive comment - I'm not involved in this
project, but I do write software for internet applications. I don't know
how much experience you have, so I might be talking down to you - apologies
in advance if you know more than me. Above all - you're doing the work, and
it's better you do it your way than that it not get done at all.
SSL is complicated, but it's well understood, has a good free
implementation (OpenSSL) and it works. No offence, but I would be very
wary of a protocol designed by someone else that hadn't had the same level
of public scrutiny. SSL with certificates, a trusted CA, a good
implementation and knowledgeable users *guarantees* not just privacy, but
that you are talking to who you expect. The "if" list is long, but I don't
know of anything that does better.
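To make that "if" list concrete, here is a minimal sketch of a verifying
client using Python's standard ssl module (the helper name and host are
my own, purely illustrative); the default context enforces exactly the
properties above - a trusted CA bundle, certificate checking, and
host-name verification:

```python
import socket
import ssl

# create_default_context() loads the system's trusted CA certificates and
# turns on the checks that give you identity, not just privacy.
context = ssl.create_default_context()

# CERT_REQUIRED: refuse any peer whose certificate doesn't chain to a
# trusted CA; check_hostname: the name in the certificate must match.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname

def open_secure(host, port=443):
    """Connect and return a socket that is private *and* authenticated.

    The handshake in wrap_socket() raises ssl.SSLError if the peer's
    certificate fails verification, so you know who you're talking to.
    """
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```

The point is that all of this comes for free from a widely-scrutinized
implementation; a home-grown crypted stream has to rebuild each of those
checks and then earn the same level of public review.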
HTTP is everywhere. You don't need to worry about firewalls - if anything
goes through, you can bet that they let HTTP through. The main problems
are a lack of state and the limit of one request/response per open
connection. You can do better with persistent connections (though you
have to measure content length before sending data), and cookies etc.
help with state. But if you stick to simple HTTP you know that things
will work. Perhaps you are only going to be transferring data between
hosts run by people who know what they are doing - in that case I guess
it doesn't matter whether you stay simple. But have you considered that
individuals at work may want to use software that connects to the system
you are designing in order to build, for example, personalized news?
Maybe that example is wide of the mark, but can you be *sure* that you
aren't excluding many, many users who may want to access this data some
time in the future, only to find that nothing but HTTP goes through
their firewall?
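To illustrate the "measure content length before sending" point, here is
a sketch (in Python, with a made-up host and path) of building an
HTTP/1.1 request for a persistent connection; the body length must be
known before any bytes go out, because without an accurate
Content-Length the server cannot tell where the request ends on a
connection that stays open for further requests:

```python
def build_post(host, path, body):
    """Build an HTTP/1.1 POST suitable for a persistent connection.

    The body is encoded first so its exact length in bytes is known
    up front; Content-Length is then computed from the encoded body,
    not from the original string (they differ for non-ASCII text).
    """
    payload = body.encode("utf-8")
    headers = (
        "POST " + path + " HTTP/1.1\r\n"
        "Host: " + host + "\r\n"
        "Content-Type: text/xml\r\n"
        "Content-Length: " + str(len(payload)) + "\r\n"  # measured first
        "Connection: keep-alive\r\n"  # reuse the connection afterwards
        "\r\n"
    )
    return headers.encode("ascii") + payload

# Hypothetical example: posting a story to some news server.
request = build_post("news.example.org", "/submit", "<story>hello</story>")
```

The cost of staying with plain HTTP is just this kind of bookkeeping,
which is a small price for a protocol that every firewall already passes.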
One final comment - more personal opinion than the above - avoid new
stuff as much as possible. You are doing a lot of work. Getting something
working on existing protocols is probably quicker. Yes, it's interesting
to design better, more efficient protocols, but no-one wants to document
them and no-one understands them. Efficiency is always over-rated.
More information about the Peerpress-main mailing list