The aim of the Semantic Web and Linked Data principles is to create a web of data that machines can process. This web of data can be viewed as a single, globally distributed dataset. Over the years, an increasing amount of data has been published on the Web; in particular, large knowledge bases such as Wikidata, DBpedia, LinkedGeoData, and others are freely available as Linked Data and through SPARQL endpoints. Exploring and performing reasoning tasks over such huge knowledge graphs is practically infeasible. Moreover, the triples involving an entity may be distributed among different datasets hosted by different SPARQL endpoints. Given an entity of interest and a task, we want to extract a fragment of knowledge relevant to that entity such that the results of the task performed on the fragment are the same as if the task were performed on the whole web of data. Here we propose a system, called KRaider (“Knowledge Raider”), that extracts the relevant fragment from different SPARQL endpoints without the user knowing their location. The extracted triples are then converted into an OWL ontology in order to enable inference tasks. The system is part of a framework, still under development, called SRL-Frame (“Statistical Relational Learning Framework”).
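The entity-centric fragment extraction described in the abstract can be sketched in a few lines. The sketch below is purely illustrative and not KRaider's actual implementation: it assumes the triples have already been fetched from one or more SPARQL endpoints and are held as an in-memory list, and it collects every triple in which a given entity appears as subject or object (a depth-1 fragment around the entity). All IRIs and the `extract_fragment` helper are hypothetical.

```python
# Minimal sketch of entity-centric fragment extraction, assuming triples
# have already been retrieved from one or more SPARQL endpoints.
# All IRIs and names below are illustrative placeholders, not KRaider's API.

def extract_fragment(triples, entity):
    """Return the subset of triples in which `entity` occurs
    as subject or object (a depth-1 fragment around the entity)."""
    return [t for t in triples if t[0] == entity or t[2] == entity]

# Toy dataset standing in for triples gathered from several endpoints.
triples = [
    ("ex:Ferrara", "ex:locatedIn", "ex:Italy"),
    ("ex:Bologna", "ex:locatedIn", "ex:Italy"),
    ("ex:UniFe",   "ex:basedIn",   "ex:Ferrara"),
]

fragment = extract_fragment(triples, "ex:Ferrara")
# fragment holds the two triples mentioning ex:Ferrara
```

A real crawler would of course iterate this step, following the IRIs found in the fragment across endpoints, before converting the collected triples into an OWL ontology.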
Title: KRaider: A crawler for linked data
Author: COTA, Giuseppe (First) (Corresponding)
Publication date: 2019
Type: 04.2 Contributions in conference proceedings (in volume)