Monday, November 30, 2009

Registry for original provider HTML pages

In case you weren't aware, the Bio2RDF project offers both RDF and a service that redirects to HTML pages, images, or other non-RDF sources that may be useful.

The HTML redirect service is particularly useful, because one can start at a Bio2RDF page and follow a link to get to the original provider's web page.

There are currently 142 namespaces registered along with HTML pages. Examples of these links include the NextBio page for Amyloid Beta precursor protein, the NCBI Entrez Geneid page for Superoxide dismutase 1, the Pharmgkb page for Superoxide dismutase 1, and the HGNC page for Superoxide dismutase 1.

The list below details the namespace prefixes that are currently registered with Bio2RDF for this service. A full set of details about the services provided for any particular namespace is available here, and the entire RDF configuration that makes the Bio2RDF system work is available here (RDF/XML).

aceview, agi_locuscode, arrayexpress, asap, aspgd, aspgd_locus, aspgd_ref, bind, biogrid, biomodels, biopatml, biosystems, brenda, cas, cath, ccds, cdd, cgd, cgsc, chebi, chemidplus, cid, citations, cog, cpath, cpd, dbpedia, dbsnp, ddbj, dictybase, dictybase_trials, dip, doi, dr, drugbank_drugs, ec, echobase, eck, ecogene, embl, ensembl, enzyme, flybase, gdb, genbank, genedb_pfalciparum, genedb_spombe, geneid, gi, gl, go, goa_ref, gopubmed, gr, gr_gene, gr_protein, gr_qtl, gr_ref, h-invdb, h_inv, hgnc, homologene, hpa, hpa_antibody, hprd, huge_navigator, hugo, intact, interpro, ipi, iproclass, isbn, issn, keywords, lifedb, linkedct_trials, ma, mesh, metacyc, mgc, mgi, msdchem, myexp_user, myexp_workflow, nar, ncbi, nextbio, nist_chemistry_webbook, nmrshiftdb_molecule, oclc, omim, pamgo_vmd, path, pathguide, pdb, pdbsum, pfam, pharmgkb, phosphosite, po, prints, prodom, prosite, pseudocap, psimod, pubchem, pubmed, reactome, rebase, refseq, rgd, rn, scop, seed, sgd, sgd_locus, sgd_ref, sgn, sgn_ref, sid, sider_drugs, sider_sideeffects, smart, so, srs, swoogle, symbol, tair_arabidopsis, taxon, taxonomy, tc, tgd_locus, tgd_ref, um-bbd, uniparc, uniprot, uniref, unists, wikipathways, wikipedia, xenbase, zfin

If you know of a biological database that has web pages for its items and is not listed here, feel free to comment about it here or email the group.

Monday, September 14, 2009

Linking Open Drug Data wins the Triplify challenge

Congratulations to Kei's group and their Linking Open Drug Data (LODD) project for winning the Triplify challenge.

It is a new contribution to the LOD cloud, and they have linked those new datasets to Bio2RDF and DBpedia URIs. That is the right way to do it!

Sunday, August 16, 2009

HOWTO: Using Bio2RDF

A Bio2RDF URI is formed by taking a data source and assigning a prefix to it. The prefix is a string that is only allowed to contain letters, numbers, the underscore (_), and the hyphen (-). The unique identifier for each object inside the namespace, the primary key for the object, is then combined with the namespace prefix to make up the Bio2RDF URI. In this example a user wants to find information about Propranolol, and they know there is a Wikipedia article about the topic. Since DBpedia mirrors the Wikipedia structure and represents it using RDF, they could go to the Bio2RDF URI for dbpedia:Propranolol.
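As a minimal sketch of the URI pattern described above (the base URL and the helper name are illustrative assumptions, not part of any official API):

```python
import re

# Prefixes may only contain letters, numbers, underscore and hyphen.
PREFIX_PATTERN = re.compile(r"^[A-Za-z0-9_-]+$")

def bio2rdf_uri(namespace, identifier, base="http://bio2rdf.org"):
    """Combine a namespace prefix and an identifier into a Bio2RDF-style URI."""
    if not PREFIX_PATTERN.match(namespace):
        raise ValueError("prefix may only contain letters, numbers, _ and -")
    return f"{base}/{namespace}:{identifier}"
```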

If the user then wants to find out where the Wikipedia article for Propranolol is referenced in other databases, they can resolve the corresponding query URI (this may take a long time given the number of databases being used). If they know they only need to find out where the article is referenced in DrugBank, they can use the linksns query restricted to the drugbank_drugs namespace (this should be much quicker because the number of databases is reduced).

There is also search functionality embedded in the Bio2RDF system. Searches can be conducted on particular namespaces, or across the entire Bio2RDF system. If a user wants to search the "chebi" namespace for "propranolol", for instance, they can use the searchns query. If they then also wish to search for "propranolol" across the other namespaces, they can use the search query (this may be slow because of the number of databases that are available for search).

If a namespace has been configured with the ability to redirect to its original interface, the redirection can be triggered by sending users to the /html/ form of the URI. For example, a user might be interested in drugbank_drugs:DB00571 (the DrugBank identifier for Propranolol) and want to see the original DrugBank interface. They could then go to /html/drugbank_drugs:DB00571 and their browser would be redirected to the description of that drug on the original DrugBank interface. Although not all namespaces have their original HTML interfaces encoded into the Bio2RDF system, some do, and it is a useful way of getting back to the non-RDF web.
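A hedged sketch of the /html/ redirect URL form (the base URL is an assumption for illustration):

```python
def html_redirect_url(namespace, identifier, base="http://bio2rdf.org"):
    """Build the /html/namespace:identifier URL that triggers the redirect."""
    return f"{base}/html/{namespace}:{identifier}"
```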

If someone is interested in taking the Bio2RDF RDF versions and using them internally, they can request either of the supported RDF formats (RDF/XML and N3) by adding /rdfxml/ or /n3/ to the front of any of the URLs they desire. Each of the links given for URIs in this post has requested the Bio2RDF HTML versions using /page/, but they can equivalently be requested using /rdfxml/ or /n3/ for RDF/XML and N3 respectively.

There are also advanced features for people wanting to determine the provenance of particular documents, since RDF doesn't natively support provenance for individual statements when multiple sources are merged into single documents, as Bio2RDF does. If the user wishes to know which sources of information were used in a particular document, they can insert /queryplan/ at the start of the URI in order to get its provenance information. This information is returned as a set of objects, including Query Types, Providers, and Namespaces, among other things. It can then be used to recreate the exact set of queries, both SPARQL and otherwise, that were used to access the information, as long as the user has access to all of the provider endpoints in the query plan. In order to replicate the queries, users could perform a SPARQL query on the resulting document such as "SELECT ?endpoint ?query WHERE { ?queryBundle a <> . ?queryBundle <> ?query . ?queryBundle <> ?endpoint . }". This query may not return exactly the same results, as there are also normalisation rules, which require knowledge of the Provider configuration in use (all of which is included in the document). To take these into account, a more advanced query would be needed that references the type predicate attached to each query bundle, in order to determine which Provider was being used and which RDF normalisation rules were required by that provider configuration.

If there are too many results to return in one hit from a particular endpoint, the results given to the user will not be complete. Although there is currently no way of signalling this to users in the RDF document, users can manually inspect the queryplan to determine what the maximum will be, and if the number of results is equal to or greater than this number, they can request subsequent offsets using the /pageoffsetNN/ mechanism, where NN is one or more digits indicating which page of results is being requested. /pageoffset32/, for instance, would be interpreted as the 32nd page of results, while /pageoffset1/ is the first page, which is the default if nothing is specified. Each pageoffset may not return the same number of results, because the resolution is implemented by distributing queries across endpoints, it is not efficient (or possible in some cases) to query endpoints for the number of results before getting the information, and there is no natural ordering between the results returned by different endpoints. The resolver should be interpreted as returning up to the limit number of results from each endpoint where possible, with the distinct set of RDF statements that occur in these results included in the document shown to the user. The default limit for the Bio2RDF system is currently 2000, so users who receive 2000 or more results know they may be able to request the next pageoffset, i.e., /pageoffset2/, etc., in order to retrieve more results if possible. Some queries may not include the limit as part of the query, and hence will not return different results for each pageoffset, so users should be careful not to request too many pageoffsets for this reason. The HTML interface for paging links to a maximum of 20 pageoffsets where needed, so that links to the other pageoffsets are not picked up by robots (although /pageoffsetNN/ links should not be followed by robots anyway, as specified in the Bio2RDF robots.txt file).
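The paging behaviour described above can be sketched as a simple loop: keep requesting /pageoffsetNN/ pages until a page comes back with fewer results than the per-endpoint limit. Here `fetch_page` is a hypothetical callable standing in for an HTTP request; the limit and page cap mirror the numbers stated in the post.

```python
def fetch_all_pages(fetch_page, limit=2000, max_pages=20):
    """Accumulate results page by page until a short page suggests the end."""
    results = []
    for n in range(1, max_pages + 1):
        page = fetch_page(n)          # results for /pageoffsetN/
        results.extend(page)
        if len(page) < limit:
            break                     # fewer than the limit: no further pages likely
    return results
```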

The pageoffset can be included together with the other instructions about the format and whether the query plan is required, in the following order, with each part optional except the query: /FORMAT/queryplan/pageoffsetNN/query, where /FORMAT/ can be /rdfxml/, /n3/ or /page/, /queryplan/ requests the information about how the query would be resolved without performing the query, and the NN in the pageoffset section determines which page to resolve. For example, the HTML version of the queryplan for the 2nd pageoffset of the "linksns/drugbank_drugs/dbpedia:Propranolol" query can be found using /page/queryplan/pageoffset2/linksns/drugbank_drugs/dbpedia:Propranolol. A known issue is that the URL links to the RDF/XML and N3 versions at the bottom of the HTML page will request the actual query instead of the queryplan, and will also not include the pageoffset. This will be fixed in a future version, but a URL constructed in the correct way will still work.
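The /FORMAT/queryplan/pageoffsetNN/query composition above can be sketched as a small builder function (the base URL is an assumption for illustration):

```python
def compose_url(query, fmt=None, queryplan=False, pageoffset=None,
                base="http://bio2rdf.org"):
    """Compose a Bio2RDF URL; every part except the query itself is optional."""
    parts = [base]
    if fmt is not None:
        assert fmt in ("rdfxml", "n3", "page")
        parts.append(fmt)
    if queryplan:
        parts.append("queryplan")
    if pageoffset is not None:
        parts.append(f"pageoffset{pageoffset}")
    parts.append(query)
    return "/".join(parts)
```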

Because of the way the HTML redirections have been included in the system, requesting the queryplan for the HTML redirection, encoded in N3, looks like /n3/queryplan/html/drugbank_drugs:DB00571, since the query in this case is "html/drugbank_drugs:DB00571", and the other parts define the result format and request the provenance record respectively.

Monday, July 20, 2009

The story so far of Linked Data, Bio2RDF is part of it !

In his latest publication, Tim Berners-Lee tells the recent story of emerging Linked Data, and Bio2RDF is mentioned as an important contributor from biology. This paper is a must-read for anyone interested in this fantastic new approach.

In this map of Linked Data, Bio2RDF's contribution is shown in purple. The corresponding SPARQL endpoints are available here:

Wednesday, July 01, 2009

Bio2RDF is now using Virtuoso 6 and its new facet browser

Bio2RDF is moving from the Virtuoso 5 to the Virtuoso 6 server. The new software supports facet browsing in real time.

We invite you to explore our graph with a full-text search query for hexokinase. Once the results list is shown, try the options in the right menu. Enjoy the discovery experience.

Try the 2009 version of the "Atlas about Human and Mouse":

The graph can also be queried in SPARQL:

The list of the Bio2RDF converted graphs will be published and updated here:

The facet browsers list:
The SPARQL endpoints list:

Bio2RDF visit at HCLS annual meeting

Bio2RDF team members Marc-Alexandre Nolin, Michel Dumontier, and Francois Belleau were invited to present the current state of the Bio2RDF project at the annual face-to-face meeting of the HCLS community. Here is a link to the presentation:

Thanks to the organizers of the event.

Monday, June 29, 2009

0.6.1 bug fix release now available

A maintenance release, version 0.6.1, was released today on sourceforge [1]. It fixes a few coding bugs in the 0.6.0 release: the namespace match method "all" was not working, the RDF rule order was not being imported from the configuration properly (so queries that relied on more than one rule did not get any results back), and included static RDF/XML sections were not actually being included. There was also a fix related to default providers that eliminates duplicate queries for namespaces where a namespace was assigned to a default provider for a query that allowed default providers.

The configuration files have also been updated, although people using the live configuration method (the default) will have received the configuration changes already. Some performance improvements related to logging have also been made that will, in some circumstances, dramatically improve the performance of the package, although the majority of the overall request latency is still internet latency related to the SPARQL queries.

From this version on, I will also be releasing MD5 hashes for each of the downloadable files, so people can check that their download matches the release on sourceforge.


Tuesday, June 23, 2009

Version 0.6.0 of the Bio2RDF server software released

The next version of the Bio2RDF software, version 0.6.0, was released today on sourceforge [1].

It has some major feature additions over the previous version, with the highlights being an RDF based configuration, the ability to update the configuration while the server is running, and support for sophisticated profiles so that users can pick and choose sources without having to change the basic configuration sources that are shared between different users. If users want to add or subtract from the base configuration they can create a small RDF file on their server and use that file to pick which sources they want to use and which queries they want to be able to execute.

If anyone wants to check out the example [2] and use it as a guide to mock up some SPARQL queries, or definitions for endpoints that go with the queries, it would be great to see what other resources we can combine into the global Bio2RDF configuration. If you need pointers on how to get your own configuration working, feel free to ask me.


Friday, May 08, 2009

Version 0.5.0 of the Bio2RDF server software released

The next version of the server software has been released on sourceforge. [1]

It contains a number of changes that will hopefully make it more useful for the tasks we want to do with linked RDF queries.

One major change is the introduction of content negotiation, which has been tested for N3 (using text/rdf+n3) and RDF/XML (using application/rdf+xml). It was made possible this quickly after the last release by reusing the content negotiation code from Pubby, the driver behind the DBpedia web interface and URI resolution mechanism. It is also possible to explicitly request the N3 format by prefixing the URL with /n3/. See [2] for an example. The ability to explicitly request RDF/XML will be added in future.
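A sketch of a content-negotiated request using the MIME types named above (text/rdf+n3 for N3, application/rdf+xml for RDF/XML); the request is only constructed here, not sent:

```python
import urllib.request

def rdf_request(uri, accept="application/rdf+xml"):
    """Build an HTTP request asking for a specific RDF serialisation."""
    return urllib.request.Request(uri, headers={"Accept": accept})
```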

Another change that will hopefully be useful is the introduction of clear RDF level error messages when either the syntax of a URI is not recognised, or the syntax was recognised but there were no providers that were relevant to the URI. See [3], [4] and [5] for a demonstration of the error messages.

There is also the ability to page through the results, which is necessary when there are more than 2000 results for a query from a particular endpoint. To use the paging facility, the URI needs to be prefixed with /pageoffsetNN/, where NN is a number indicating which page you would like to look at. The queries are not currently ordered, but in the short term it is reasonable to expect they will be consistent enough to page through all of the results. Ordered queries take much longer than unordered queries, so it is unlikely that the public mirrors will ever introduce ordering. An example of a paging URL could be [6] or [7].

There is also the ability to get an RDF document describing what actions would be taken for a particular query. It is interoperable with the /n3/ and /pageoffsetNN/ URI manipulations, so URIs like [8] can be made up and resolved. This RDF document is set up to contain all of the necessary information for the client to then complete the query with their own network resources if necessary. In future, clients should be able to patch into this functionality without having to keep a local copy of the configuration on hand, although a distributed configuration idea is also in the works for some time in the future. Currently the distribution is read-only from [9]. The [9] URL has also been made content negotiable for the HTML/RDF-XML/N3 content types, defaulting to HTML if the content type is not recognised by the Sesame Rio library, but it can still be accessed in a particular format without content negotiation by appending /html, /n3 or /rdfxml.

Since the last release the GeoSpecies dataset has also been partially integrated, although it doesn't seem to have a SPARQL endpoint, so currently it is only available for basic construct queries. [10] Not all of the namespaces inside the GeoSpecies dataset have rules for normalisation to the Bio2RDF URI syntax, but the rest will be integrated eventually.

The order of normalisation rules is now respected when applying them, with lower numbers applied before higher numbers. Rules with the same order number cannot be relied on to be applied in a consistent manner if they overlap syntactically.
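The order-sensitive application can be sketched as a sort-then-apply loop; the rule structure here is illustrative, not the servlet's actual classes:

```python
def apply_rules(text, rules):
    """Apply normalisation rules in ascending order of their order number."""
    for rule in sorted(rules, key=lambda r: r["order"]):
        text = rule["apply"](text)
    return text

# Two overlapping rules: the outcome depends entirely on their order numbers.
rules = [
    {"order": 200, "apply": lambda s: s.replace("b", "B")},
    {"order": 100, "apply": lambda s: s.replace("a", "b")},
]
```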

The MyExperiment SPARQL endpoint [11] has also been integrated into Bio2RDF since the last release, so, for instance, a user in the MyExperiment system can be resolved using [12]. There are also other things, like workflows, which could in the future provide valuable interconnections for the linked RDF web. I think further integration with MyExperiment would be invaluable to the future of the Bio2RDF network.

Partial support for InChI resolution has also made it into this release, although there are some syntax bugs that stop Sesame from being able to parse the resulting RDF/XML, so InChIs are only being resolved using PubChem so far. Some InChIs, particularly those which contain + signs, will also be unresolvable for the time being, because the Apache HTTPD, Apache Tomcat, and URLRewrite stack we are using decodes the plus signs to spaces somewhere along the line, and it is hard to figure out what configuration is needed to avoid it. It was hard enough figuring out how to make encoded slashes (%2F) usable inside identifiers (they need to be double encoded as %252F to avoid interpretation by the HTTPD/Tomcat/URLRewrite algorithms), so I am not sure what progress will be made with the plus signs in the near future.
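The double-encoding workaround described above can be sketched with the standard library: encode the identifier once, then encode the resulting percent signs again, so an embedded "/" reaches the servlet as %252F rather than being treated as a path separator.

```python
from urllib.parse import quote

def double_encode(identifier):
    """Percent-encode twice so reserved characters survive the HTTPD/Tomcat stack."""
    return quote(quote(identifier, safe=""), safe="")
```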

DOI resolution has also been integrated from both the Uniprot Citations database and the , but it will likely only be fully useful for science-related DOIs, I think.

There are currently 368 namespaces known by the server software for Bio2RDF, with 231 information provider configurations (although the real number of providers is less than this, due to duplication of a few providers to enable reverseconstruct and unpercentencoded queries where necessary). The number of combinations that are currently encapsulated by the server configuration can be found at [13].

It is hard to believe so much could be packed into a new release two weeks after the last release!

See the complete list of changes at [14].

If anyone has alternative configurations that they have made up using the software, I am more than willing to include them in the distribution so others can utilise them. The configuration file syntax is still in flux and won't likely become stable until the 1.0 release, but the changes are mostly additions to support new features, so configurations based on older software versions are still useful and can be migrated to the new scheme.


Tuesday, April 21, 2009

2.4 billion triples of bioinformatics RAW DATA NOW

In his recent talk at TED, Tim Berners-Lee invited data providers to make data available in RDF format to help the process of building the Linked Data web. He asked them to offer RAW DATA NOW.

We fully share this approach in the Bio2RDF community: our goal is to make public datasets from the bioinformatics community available in RDF format via standard SPARQL endpoints (a Virtuoso server is used for that). We strongly believe in the semantic web approach to solving science problems, but we do not want to wait for data providers to do the RAW DATA conversion job. Converting data to RDF is not fun; we did a lot of this dirty job, and here are the results for the current Bio2RDF release of 34 data sources.

Our current datasets in N3 format are available here:

We invite semantic search engine providers to index these files.

The way we produce them is documented in our wiki at SourceForge, in the Cookbook section:

The current list of SPARQL endpoints in the Linked Data cloud is hosted here:

Bio2RDF's 2.4 billion triple graph of linked data represents 51% of the current global Linked Data graph.

Finally, this is what this highly connected knowledge world looks like.

I would like to take this occasion to thank all the enthusiastic biologists and researchers who invest themselves in annotating articles, proteins, and gene products. Without this essential work of connecting documents and concepts together, this project would not have been possible.

For the 20th anniversary of the web, I would also like to thank Tim Berners-Lee for his inspiring vision. Bio2RDF may not be the awaited killer app of the life sciences to demonstrate the semantic web's potential, but let's say that it is only the beginning of the Linked Data cloud built by and for scientists.

The WWW2009 workshop Linked Data on the Web (LDOW2009) was held today; I would like to say how important the work of this community is. Finally, a last word to congratulate the Virtuoso team, and especially Orri Erling, for their fantastic work on the new Virtuoso 6.0 server, soon to be released. I cannot wait to see Bio2RDF data in this amazing engine.

Bio2RDF's map new graphic representation

This word net represents the current namespace connections between Bio2RDF SPARQL endpoints. The RDF datasets that were analyzed come from Bio2RDF's download page. These representations were generated with the Many Eyes visualization tools.

Static version.

This graph represents connections between namespaces in Bio2RDF's network of SPARQL endpoints; the highlighted orange dots correspond to databases rdfised by Bio2RDF.

Static version.

Thursday, April 02, 2009

New Bio2RDF query services

The 0.3 release provides the ability to link to license providers, so the applicable license for a namespace may be available by following a URL. The URL syntax for this is /license/namespace:identifier. It was easier to require the identifier to be present than to omit it. So far the identifier portion is not being used, so it merely has to be present for the URL resolution to occur, but in the future there is an allowance for different licenses to be given based on the identifier, which is useful for databases that are not completely released under a single license.

We provide countlinks and countlinksns, which count the number of reverse links to a particular namespace and identifier, from all namespaces or from within a given namespace respectively. Currently these only function on Virtuoso endpoints, due to their use of aggregation extensions to SPARQL. The URL syntax is /countlinks/namespace:identifier and /countlinksns/targetnamespace/namespace:identifier.

There is also the ability to count the number of triples in each SPARQL endpoint that point to a given Bio2RDF URI (or its equivalent identifier for non-Bio2RDF SPARQL endpoints). This ability is provided using /counttriples/namespace:identifier.

We also provide search and searchns, which attempt to search globally using SPARQL, or within a particular namespace, for text matches (these aren't currently linked to the rdfiser search pages, which may be accessed using certain searchns URIs). The searches are all performed using the Virtuoso full-text search paradigm, i.e., bif:contains; other SPARQL endpoints haven't yet been implemented, even with regex, because regex is reasonably slow, but it would be simple to construct such a query if people thought it was necessary. The URL syntax is /search/searchTerm and /searchns/targetnamespace:searchTerm.
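The two search URL forms above can be sketched as one helper; the base URL is an assumption, and the syntax follows the post's stated patterns:

```python
def search_url(term, namespace=None, base="http://bio2rdf.org"):
    """Build a /search/ (global) or /searchns/ (per-namespace) query URL."""
    if namespace is not None:
        return f"{base}/searchns/{namespace}:{term}"
    return f"{base}/search/{term}"
```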

The coverage of each of these queries over the current Bio2RDF namespaces can be found here.

If anyone has any (possibly already SPARQL) queries on biology related databases that they regularly execute that can either be parameterised or turned into Pipes then it would be great to include them in future distributions for others to use.

RDF use and generation improvements

The 0.3 version of the Bio2RDF Servlet implements true RDF handling in the background, to provide consistency of output and the potential to support multiple output formats such as NTriples and Turtle in the future, although the only output currently supported is RDF/XML. The Sesame library is used to provide this functionality.

More RDFiser scripts are provided as part of the source distribution, including Chebi, GO, Homologene, NCBI Geneid, HGNC, OBO and Ecocyc, along with guides on the Bio2RDF wiki about how to use the scripts to regenerate new RDF versions from future versions of each database.

Live recent network statistics available

The 0.3 releases provide the ability to show live statistics, to diagnose some network issues without having to look at log files. The URL is /admin/stats:
  • Shows the last time the internal provider blacklist was reset, indicating how much activity is being displayed, as the statistics are reset every time the blacklist is reset. This blacklist is only implemented to stop further communication with malfunctioning providers.
  • By default, shows the IPs accessing the server, with an indication of the total number and duration of their queries. In low-use and private situations it can be configured to also show the queries being performed.
  • Shows the servers that have been unresponsive since the last blacklist reset, including a basic reason such as an HTTP 503 or 400 error.
There is also live blacklisting functionality, provided in version 0.3.2 to deter crawlers that regularly utilise functionality they shouldn't according to the Bio2RDF robots.txt file. The thresholds for this have been set rather high by default, and the functionality can be turned off completely by people who download and install the package and datasets locally. Specifically, a regular user of the public mirrors should make sure they are not making more than 40 requests in each 12 minute statistics period, or, if they are making more than 40 requests in each 12 minute period, that more than 25% of their queries are non-robots.txt queries. These parameters may change depending on further investigation. An individual can access /error/blacklist, even if they are not currently blacklisted, to see a list of requests from their IP address since the start of the last 12 minute statistics period.
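A rough sketch of the stated thresholds (40 requests per 12 minute period, 25% share); the exact server-side logic may well differ, and the function name and parameters are illustrative only:

```python
def within_fair_use(total_requests, non_robots_requests,
                    max_requests=40, min_non_robots_share=0.25):
    """Return True if a client's activity stays within the stated policy.

    total_requests: requests made this 12 minute statistics period.
    non_robots_requests: how many of those were not robots.txt-restricted queries.
    """
    if total_requests <= max_requests:
        return True
    # Above the cap, more than 25% of the traffic must be non-robots.txt queries.
    return (non_robots_requests / total_requests) > min_non_robots_share
```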

Support provided for more non-Bio2RDF providers

The 0.3 Bio2RDF Servlet release implements support for more non-Bio2RDF SPARQL endpoints, such as LinkedCT, DrugBank, Dailymed, Diseasome, Neurocommons, DBpedia, and Flyted/Flybase.

The relevant namespaces for these inside of Bio2RDF are:
  • DBpedia - dbpedia, dbpedia_property, dbpedia_class
  • LinkedCT - linkedct_ontology, linkedct_intervention, linkedct_trials, linkedct_collabagency, linkedct_condition, linkedct_link, linkedct_location, linkedct_overall_official, linkedct_oversight, linkedct_primary_outcomes, linkedct_reference, linkedct_results_reference, linkedct_secondary_outcomes, linkedct_arm_group
  • Dailymed - dailymed_ontology, dailymed_drugs, dailymed_inactiveingredient, dailymed_routeofadministration, dailymed_organization
  • DrugBank - drugbank_ontology, drugbank_druginteractions, drugbank_drugs, drugbank_enzymes, drugbank_drugtype, drugbank_drugcategory, drugbank_dosageforms, drugbank_targets
  • Diseasome - diseasome_ontology, diseasome_diseases, diseasome_genes, diseasome_chromosomallocation, diseasome_diseaseclass
  • Neurocommons - Uses the equivalent Bio2RDF namespaces, with live owl:sameAs links back to the relevant Neurocommons namespaces. Used for pubmed, geneid, taxonomy, mesh, prosite and go so far
  • Flyted/Flybase - Not converted yet, only direct access provided using search functionalities
Live owl:sameAs references are provided which match the URIs used in SPARQL queries, to keep linkages to the original databases without leaving the Bio2RDF database:identifier paradigm, so that people who know the DBpedia, etc., URIs get a link to their current knowledge.

Some URIs are produced by the owl:sameAs additions, but these aren't standard, and are only shown where there is still at least one SPARQL endpoint available which uses them. People should utilise the Bio2RDF versions when linking to Bio2RDF.

Any further contributions to this list, or other datasets which already utilise Bio2RDF URIs, would be very useful! See the list of namespaces already implemented here.

Provider, query and namespace statistics now available

At the time of posting Bio2RDF supported:
  • 230 namespaces
  • 35 different internal query titles (some of these map to the same URI pattern, so there are fewer URI query options than this)
  • 140 provider options, including a large number of /html/database:identifier providers which redirect to HTML pages describing the Bio2RDF identifier, as well as the Bio2RDF SPARQL endpoints
More statistics can be found here.

A list of the actual provider URLs mapped back to namespaces and queries can be found by downloading the Bio2RDF Servlet and changing a setting in to make the page more verbose. If that setting were turned on for the public mirrors, it would result in a very large file each time.

LSID support for Bio2RDF

From release 0.3.2 of the Bio2RDF Servlet, any URI similar to will be accessible using its equivalent LSID, with as the proxy, using . The LSID syntax will not be available for use with custom services such as or

This will NOT become the standard identifier, but it provides compatibility for users who wish to utilise LSIDs.

Monday, March 30, 2009

Bio2RDF and Semantic Web Pipes

The Bio2RDF Servlet has been packaged with Semantic Web Pipes. It provides runtime support for pipes, without the designer. Pipes you design at either the public pipes website or your own pipes webapp will run inside your Bio2RDF server, providing another method for scripting your queries.

Once you download and install the Servlet, you will be able to access the pipes functionality using URLs which look like the following:


Each of the parameters in the pipe is entered as a "name=value" combination, and the combinations are put together using "/".
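The "name=value" joining described above can be sketched as follows; the base path and pipe name are illustrative assumptions, not the servlet's actual layout:

```python
def pipe_url(pipe, params, base="http://bio2rdf.org/pipes"):
    """Join pipe parameters as name=value pairs separated by '/'."""
    encoded = "/".join(f"{name}={value}" for name, value in params.items())
    return f"{base}/{pipe}/{encoded}"
```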

Download the latest Bio2RDF Servlet to experiment.

Saturday, March 21, 2009

Bio2RDF's contribution to the GGG is on the map

I am very pleased to see that Bio2RDF's contribution is now on the GGG map of linked data. A big thanks to all the data providers and the active members of the Bio2RDF group. Not all of the SPARQL endpoints we provide are there yet, but it is a great beginning.

Thursday, February 05, 2009

When Bio2RDF meets Taverna

Try this Taverna workflow to explore the possibilities of building a mashup on the fly from Bio2RDF's SPARQL endpoints.

What is known about HIV using Bio2RDF's SPARQL endpoints?