Re: [DotGNU]GNU.RDF update 29-3-2003
From: James Michael DuPont
Subject: Re: [DotGNU]GNU.RDF update 29-3-2003
Date: Mon, 7 Apr 2003 13:14:07 -0700 (PDT)
--- Peter Minten <address@hidden> wrote:
> Chris Smith wrote:
> > Data capture and searching must be just that. Two independent steps:
> > 1. discover what's out there and store that information.
>
> This is a matter of metanodes. Metanodes are RDF servers that only
> contain information about which external resource is backlinked to
> which other external resource. Say for example I put up RDF data about
> how badly my government is doing and have a link to their website; I
> can't expect them to link to my data, so it will be hard to find. If I
> register the link from my data to the government site at a metanode,
> it becomes easier to find.
This is interesting!
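To make the idea concrete, here is a minimal sketch of what a metanode's backlink registry could look like. All names here are hypothetical; the mail doesn't specify an API, only that a metanode records "source links to target" pairs and answers "who links here?" queries.

```python
# Hypothetical metanode backlink registry (all names invented for
# illustration; this is not GNU.RDF code).
from collections import defaultdict

class Metanode:
    def __init__(self):
        # target URI -> set of source URIs known to link to it
        self._backlinks = defaultdict(set)

    def register_link(self, source_uri, target_uri):
        """Record that source_uri links to target_uri."""
        self._backlinks[target_uri].add(source_uri)

    def backlinks(self, target_uri):
        """Who links to target_uri? The lookup the mail describes."""
        return sorted(self._backlinks[target_uri])

node = Metanode()
node.register_link("http://example.org/my-rdf-data", "http://gov.example/")
print(node.backlinks("http://gov.example/"))
# prints ['http://example.org/my-rdf-data']
```

The point of the example: the government site never has to cooperate; the backlink lives at the metanode, not at either endpoint.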
> In my vision the metanodes are VRS servers that spend some of their
> power on searching the Semantic Web using automatic update
> notification (something which can be implemented as an agent and is
> the basis of news feeds in GNU.RDF).
cool.
>
> > .... if however, the called DGEE discovery service is supposed to
> > go off and search the web for stuff, then that's a different matter
> > altogether - and I don't know how (regardless of the DGEE) that
> > would be achieved successfully and in a timely fashion anyway.
>
> Me neither, this is the Linking Problem again. If there is enough
> metadata you could simply walk down a path with no uncertainties; one
> node would simply point to the next. This could be done pretty fast
> with a binary protocol (that's why I like the idea of a binary
> protocol). However, if you need to search for something that does not
> have a direct path going to it, the search boils down to a complete
> search of all the RDF servers. Of course it would be possible to use
> some techniques to reduce the search time, but even then it would be
> hard.
>
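The contrast here, walking a known chain of node-to-node pointers versus brute-force checking every server, can be sketched like this. The data and names are purely illustrative assumptions, not anything from GNU.RDF:

```python
# Illustrative contrast: following a direct path of pointers vs. an
# exhaustive search of every server (all data invented).
next_hop = {"A": "B", "B": "C", "C": None}     # each node points onward
data = {"A": {"x"}, "B": {"y"}, "C": {"goal"}}  # what each node holds

def walk_path(start, wanted):
    """Fast case: each node tells us exactly where to look next."""
    node = start
    while node is not None:
        if wanted in data[node]:
            return node
        node = next_hop[node]
    return None

def exhaustive_search(wanted):
    """Slow case: no path, so every server must be contacted."""
    return [n for n, items in data.items() if wanted in items]

print(walk_path("A", "goal"))       # prints C (via A -> B -> C)
print(exhaustive_search("goal"))    # prints ['C']
```

The path walk touches only the nodes on the chain; the exhaustive search touches every node, which is why it scales so badly as servers are added.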
> One promising solution to the linking problem is the VRS. The VRS
> could host gigantic databases without one person having total
> control; if enough people participate in one or a few mega servers,
> the linking problem becomes much more solvable. This means the
> communication inside the VRS must be very, very fast though. Btw, I
> don't just mean the metanode information, but also normal
> information.
You can also build servers that are responsible for storing links about
a topic. For example, if I write software, I would like to have a fast
server that stores the metadata about that software somewhere.
> All in all my problem boils down to this: the Linking Problem becomes
> more of a problem when there are more servers in the Semantic Web;
> the fewer servers there are, though, the more power the server owners
> have, and that's bad.
Look at Napster/Gnutella: many, many servers.
>
> Still the Linking Problem stays pesky, and I'm beginning to believe I
> brought it upon myself with the URI-based resolver system that
> determines which server a resource is on based on the URI. This makes
> it hard to proxy things. It's the fastest resolver I can think of
> though; all the others are a whole lot slower.
Hmm, I think a routing system is needed.
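For comparison, the URI-based resolver Peter describes amounts to nothing more than string parsing, which is why it is fast and also why it is hard to proxy or route around: the server is baked into the identifier. A sketch, assuming plain HTTP-style URIs (the actual GNU.RDF scheme isn't specified in this thread):

```python
# Sketch of a URI-based resolver: the responsible server is derived
# directly from the URI itself, so "resolution" is just parsing.
# Assumes ordinary http://host/path URIs, which the thread doesn't
# actually pin down.
from urllib.parse import urlparse

def resolve_server(resource_uri):
    """Return the host responsible for a resource, read off its URI."""
    return urlparse(resource_uri).netloc

print(resolve_server("http://rdf.example.org/software/introspector"))
# prints rdf.example.org
```

A routing system would replace this direct string-to-server mapping with an indirection table that can be updated when resources move or are proxied, at the cost of an extra lookup.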
>
> I think I'll have to create a GNU.RDF.RFC system to work out all
> these solutions to the Linking Problem ;-).
Bring it on!
Mike
=====
James Michael DuPont
http://introspector.sourceforge.net/