In fact in BlogEd I do have objects that are properties. These are more widely known as Maps. RelationMap takes a URI id in its factory constructor and has methods that should remind one of the Java Map interface - indeed it used to implement the Map interface until recently. I used this a lot in the MetaWeblogPublisher class, which keeps track of the mapping between the Atom and RSS2 versions of an entry.
RelationMap local2remote() {
    // lazily create the map, identified by relationURI, on first use
    if (local2remote == null) {
        local2remote = blog.factory.getRelationMap(relationURI);
        local2remote.addSuperRelation(relationTypeURI, false);
    }
    return local2remote;
}
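For illustration, here is a hypothetical use of such a map. The put and get method names are assumed, mirroring java.util.Map, and the atomEntryUri and rss2EntryUri variables are placeholders; the real RelationMap API may well differ:

// relate the local Atom version of an entry to its remote RSS2 counterpart
local2remote().put(atomEntryUri, rss2EntryUri);

// later, look up which remote version corresponds to a given local entry
URI remote = (URI) local2remote().get(atomEntryUri);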
One should therefore easily be able to imagine doing PUT, GET, PATCH operations on a Relation URL. For relations that support this, a GET on the relation's URL would, I suppose, return all the triples for which that relation holds. Of course one had better have a relation that is somehow restricted, or else the number of results returned would be huge. So if there were a http://example.net/ontology/HenryKnows relation that was a sub relation of foaf:knows, then a GET to that URL could return a list of all the triples having me as subject and the people I know as objects. Because the objects of ex:HenryKnows all have the same subject, it seems more appropriate to find those relations on the foaf:Person URI that represents me. But there will clearly be cases where the relation itself is more constant than the objects that it relates. We have these a lot in Java (Maps), so I reckon we will also find them a lot on the Semantic Web.
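To make the idea concrete, here is a minimal sketch of what such a request might look like. This is pure speculation - no server implements this today - and the Accept header and the shape of the response are assumptions, not anything defined by BlogEd:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RelationGet {
    public static void main(String[] args) throws Exception {
        // the relation URL from the example above
        URL relation = new URL("http://example.net/ontology/HenryKnows");
        HttpURLConnection con = (HttpURLConnection) relation.openConnection();
        con.setRequestMethod("GET");
        // ask for an RDF serialisation of the triples (the content type is an assumption)
        con.setRequestProperty("Accept", "application/rdf+xml");

        // the response would list the triples for which the relation holds,
        // e.g.  <uri-for-me> ex:HenryKnows <uri-of-someone-I-know> .
        BufferedReader in = new BufferedReader(
                new InputStreamReader(con.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}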
Speaking about this, I am very disappointed that Google Earth has a Microsoft Windows only client. What a waste of time on their part! With a Java version they could have covered all the OSes, including OS X and Linux. Are they trying to solidify the Microsoft monopoly? You think that Java is too slow for 3D graphics? Try these out:
My experience with swimming pools in London a couple of years ago was atrocious. There was, I believe, only one 50 meter Olympic swimming pool in the whole region (7 million people). Most of the other pools were either too small (20m or less!), too shallow, or just way too dirty, and usually a combination of all three. Each of Berkeley, Stanford and San Francisco had better facilities, in my opinion, than the whole of London. For some reason even private facilities (except for those built by Virgin's Branson - the only man with any common sense in England?) seemed to have purposefully built pools that were not fit for exercising in. Sport, after all, was meant to be, in the old Britain, something not to be too competitive about.
So the Olympics in London can only be a good thing. Hopefully this focus on sports will help the Capital (and the nation) rebuild the required infrastructure, to give those that want to go to the great event a chance to get there (as my brother Alexander did in rowing). It is true that London is really a world city in that every nation has ample representatives there. But the public infrastructure often feels like it has not been updated in half a century. This is the time to do something about it.
Benjamin Carlyle captures many of the things I discovered whilst using RDF in my implementation of BlogEd 0.7, but also goes further by emphasizing the RESTful aspect, which I have not yet had time to integrate.
The relation between Object Orientation, RDF and Java Beans is built into the core of BlogEd now. In BlogEd I can create an Ontology simply by annotating Java-Bean-like interfaces and their properties with URLs, as for example in the AtomEntry interface. This gives me a simple map between the OO world and the RDF world. I can use these classes to create OWL ontologies mechanically (I will be writing the package to do this automatically next). So the emphasis on relationships, OO and universal naming is absolutely correct IMHO. If you understand Object Orientation you are one small step from understanding OWL; if you understand OWL you are one small step from being able to apply it to programming in Java.
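Roughly, the idea looks like this. It is only a sketch: the @rdf annotation and the example URLs below are made up to illustrate the mapping, not the actual BlogEd annotations:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// a hypothetical annotation carrying the URL that names a class or property
@Retention(RetentionPolicy.RUNTIME)
@interface rdf {
    String value();
}

// a Java-Bean-like interface whose type and properties are named with URLs,
// from which an OWL ontology could be generated mechanically
@rdf("http://example.net/ontology/Entry")
interface Entry {
    @rdf("http://example.net/ontology/title")
    String getTitle();
    void setTitle(String title);

    @rdf("http://example.net/ontology/summary")
    String getSummary();
    void setSummary(String summary);
}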
Mark Baker points out, though, that the relationship between REST and OO is not so much to be found at the level of properties. That this mistake can easily be made is understandable, since in RDF properties are themselves resources.
Nope, properties are properties, resources are the Beans themselves.
In my BlogEd framework this is also very easy to see. To create an object with a specific URI I can simply do the following (see MetaWeblogPublisher.getBlogScheme()):
AtomScheme localHandle =
    (AtomScheme) factory.createObject(remoteUri, AtomScheme.class);
This returns me a Java-Bean-like object whose Java id is the localHandle pointer and whose web id is the URI passed as the first argument, remoteUri. These are essentially equivalent. I can then set or get properties on that object.
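For example (the property and its accessors here are invented just to show the Bean-style access; the real AtomScheme interface defines its own):

localHandle.setTitle("My first entry");   // hypothetical property: writes local state
String title = localHandle.getTitle();    // and reads it back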
What Benjamin gets right is of course that by changing the state of that JavaBean one should also be changing the state of the remote resource. By adding a new relationship one should be sending a PUT to the remote resource, if not immediately then during some commit phase. During initialisation of the object one could also do a GET on the remote resource. If one wanted to create a new object one should then also send a POST to the remote server. And of course the DELETE method is self explanatory. So at all points in this interaction the object's id, the resource remoteUri, is the key to our interaction with the remote object. It is the one that gives us the ability to change the remote state, create a remote object, delete it, etc...
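To spell the correspondence out, here is a rough sketch. The setTitle(), createNew(), delete() and commit() methods are hypothetical, invented only to show which HTTP verb each step would map onto:

// hypothetical sketch of the object-to-REST mapping described above
AtomScheme handle = (AtomScheme) factory.createObject(remoteUri, AtomScheme.class);
// -> GET remoteUri: initialise the local Bean's state from the remote resource

handle.setTitle("a new title");                   // hypothetical property
// -> PUT remoteUri: send the changed state, immediately or at commit time

AtomEntry fresh = (AtomEntry) factory.createNew(AtomEntry.class);   // hypothetical
// -> POST to the remote server, which creates the resource and assigns its URI

factory.delete(handle);                           // hypothetical
// -> DELETE remoteUri: remove the remote resource

factory.commit();                                 // hypothetical
// -> flush any PUTs and POSTs still pending from the commit phase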
BlogEd currently does not have this RESTful integration. When I create an object BlogEd only looks at its local database to find the information. When saving state BlogEd saves it to the database but does not update a remote resource. But if we think of the database as a local cache then we may be able to think of this as a delayed update. After all REST is designed to work with caches.
The relationship between OO state and REST has become clearer to me over the last year. Up till now I have attacked the problem from an atemporal perspective in my database. REST on the other hand is a temporal API: PUT, DELETE, GET, etc. are always to be done now. In RDF this is the creation of defeasible relationships. When you change the state of an Entry you are both adding a new state in the temporal sequence of things and changing the relationship of 'being current' from one entry state to the next. This is like changing the value of a variable in an object. The variable is, like a resource, always the same (atemporally it is always pointing at the same thing) but its temporal states keep changing. c++! The remote handle is always the same, but its states keep changing.
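In code the analogy is just this (a toy illustration, with made-up variable names):

// the field is like the resource: its identity never changes,
// but each assignment replaces the state it refers to - like a PUT
Entry current;           // the constant handle (compare: the resource URI)
current = firstDraft;    // the state 'being current' at time t1
current = revisedDraft;  // at t2 'being current' has moved to a new state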
So the most important aspect of RESTful programming as Benjamin points out is that now all objects are universally nameable and universally accessible. A few simple verbs allow us to create, delete and change the state of these resources. Access Control Lists allow fine tuning of responsibilities for a resource. And OWL gives us the Object Oriented conceptual structure to predict and understand the content of these resources and how they relate to others.
In summary: if the Network is the Computer, the Network + REST + OWL is a Distributed Object Oriented Database.
PS. I keep changing the tag line above. Trying to get it right.
Let's hope the themes of Community and Sharing, launched so well recently by the Sun marketing campaign, do not get similarly abused. I'd hate to think what "community chewing gum" is going to taste like :-)
Tears of joy sprang to my eyes seeing this thought, so clear and so far ahead of its time (400 BC):

The universe is infinite because it has not been produced by a creator. The causes of what now exists had no beginning.
There is an infinite number of worlds of different sizes: some are larger than ours, some have no sun or moon, others have suns or moons that are bigger than ours. Some have many suns and moons. Worlds are spaced at differing distances from each other; in some parts of the universe there are more worlds, in other parts fewer. In some areas they are growing, in other parts, decreasing. They are destroyed by collision with one another. There are some worlds with no living creatures, plants, or moisture.
For a client such as BlogEd, which is designed to allow you to write a number of entries while offline, keep a history of your changes, and publish your entries in multiple places, the MetaWeblog API has to be coerced into doing a job it was not really designed to do. It does it, but only screaming and shouting. Hopefully the upcoming Atom Publishing Protocol will give us something we can really work with, and give the author more control over his creations.
It also led me to discover gaming on the internet. Around 1993 there were a number of clients using the Internet Go Server (IGS) protocol, allowing people with any computer or operating system around the Internet to play the game. At the time I had access to Sun computers at the University of Westminster and so was introduced to the pleasure of compiling the various clients available for X-Windows. It was, when it worked, relatively easy - I could do it - but clearly this was way too complicated for most people. The general public would need easily accessible binaries that just worked, but with all the versions of unix running on so many different cpu architectures, it just was not possible for the developers to test their programs on each platform. So they had to release the source code. This allowed them to increase the market size, but reduced the economic incentive a little, and most importantly reduced the market a lot by requiring higher-level skills from the end user: even the requirement to choose the right binary for the user's operating system may be difficult for many users to get right.
I worried that the platform with the most users would inevitably win, as developers would find it more profitable to create tools for that platform. Windows (still in version 3.1 at the time) was clearly in a massive lead when counting the number of users. This formed the basis of a very strong argument I had at Imperial College in 1995, where I showed a few students who argued that it was easy to program in C just how complicated compiling a simple program like cgoban could be. They tried to argue that this just showed that the programmers were bad. I argued back that this was a very bad omen for Unix. Something was needed that allowed programs to just compile themselves on the fly on a computer without any user interaction. Even better if the program would run on any computer, as this would resolve the marketing problem of platform size. In short: one needed a way to write programs that ran on all computers, allowed proprietary development in order to let capitalistic market incentives work, and was easy enough to program in, in order to maximise the pool of developers working with it. A few months later the alpha version of Java appeared on the scene. It was clear that this was going to be huge, as it solved exactly all of the above problems.
Java developed, improving in leaps and bounds with each iteration, and William Shubert, the cgoban author, rewrote his client and tuned it for his Kiseido Go Server. As he now controls both the client and the server, he has been able to add a lot more functionality than is available on the original IGS. On KGS one can easily analyse a game with one's opponent and look at variations that might have changed the outcome, whilst chatting about these. And it always works for all users currently playing on KGS. This makes a huge difference to the level of social interaction between the participants. As a result it is a lot friendlier to play on KGS than on IGS.
Presumably IGS could develop its protocol to follow suit, but they would have to update all their clients simultaneously too, which would be a lot more work - not to say impossible. One might think that updating the clients asynchronously over time would be a possible strategy, but that is without taking into account the misunderstandings this would introduce between players using different clients. The users of the more advanced software would have to understand that the reason some participants don't respond to some types of requests, or don't respond in as friendly a manner, is that their clients have reduced functionality, or that their client software does not present the information to them in the same way. The more likely outcome is that the participants will acquire very defensive social interaction habits. And habits are difficult to lose, especially when reinforced by group feedback mechanisms.
So KGS is an interesting example of a closed but cross-platform solution doing better than an open cross-platform solution. (In any event both do a lot better than single-platform solutions.) I don't think that open sourcing KGS would harm them that much, but it might remove an economic incentive they have to keep developing their software, without necessarily bringing the community much of an advantage.