Commit 4f688c5 — reduced lov section
Ghislain committed Mar 2, 2015 (1 parent: 4559908)
Showing 2 changed files with 19 additions and 42 deletions.
paper-eswc2015/src/intro.tex (2 additions, 25 deletions)
Linked Data principles and practices are being adopted by an increasing number of data providers, resulting in a global data space on the Web that contains hundreds of LOD datasets \cite{Heath_Bizer_2011}. There are already several guidelines for generating, publishing, interlinking, and consuming Linked Data \cite{}. An important task within the generation process is to build the vocabulary used for modelling the domain of the data sources, and the common recommendation is to reuse available vocabularies as much as possible \cite{Heath_Bizer_2011,hyland14}. This reuse approach speeds up vocabulary development, so publishers save time, effort, and resources.

At the time of writing we have not found specific and detailed guidelines that describe how to reuse available vocabularies when building new ones. There are research efforts, such as the NeOn Methodology \cite{suarezfigueroa2012ontology}, the Best Practices for Publishing Linked Data W3C Working Group Note \cite{hyland14}, and the work proposed by Lonsdale et al. \cite{Lonsdale2010318}, but they do not provide guidelines on how to reuse vocabularies at a fine-grained level, i.e., how to reuse specific classes and properties. Our claim is that this difficulty is one of the major barriers to vocabulary development and, in consequence, to the deployment of high-quality datasets in Linked Data.

In this paper we propose an initial set of guidelines for this task and provide technological support by means of a plugin for \protege, one of the most popular frameworks for developing ontologies in a variety of formats, including OWL, RDF(S), and XML Schema. It is backed by a strong community of developers and users in many domains. Part of the success of \protege\ also lies in the ability to extend the core framework with additional functionality by means of plug-ins.

At the same time, the recent success of Linked Open Vocabularies (LOV\footnote{\url{http://lov.okfn.org/dataset/lov/}}) as a curated catalogue of ontologies is helping to converge on best practices for publishing vocabularies on the Web, as well as supporting the data publication activity on the Web. LOV comes with many features, such as an API, a search function, and a SPARQL endpoint.

Moreover, we propose in this paper to explore, design, and implement a LOV plug-in for \protege\ that eases the development of ontologies by reusing existing vocabularies at a fine-grained level. The tool aims to improve the modelling and reuse of ontologies used in the LOD cloud by providing the following features in \protege:
\begin{itemize}
\item Propose to the user a list of candidate vocabularies in LOV matching a given term.
\item Provide an updating mechanism (synchronization) with the LOV catalogue.
\item Check whether a new vocabulary created in \protege\ satisfies the LOV recommendations \cite{pybernard12}.

\item Suggest a newly created vocabulary to LOV from within \protege.
\end{itemize}
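To make the last feature above concrete, a conformance check could, at its simplest, verify that a vocabulary's self-description carries a few recommended metadata properties. The following is a minimal sketch in plain Python; the property list is an illustrative assumption inspired by common LOV practice, not the normative recommendations of \cite{pybernard12}:

```python
# Hypothetical sketch: report which recommended metadata properties are
# missing from a vocabulary's self-description.  The property list below
# is an illustrative assumption, not the normative LOV recommendations.

RECOMMENDED_PROPERTIES = [
    "http://purl.org/dc/terms/title",
    "http://purl.org/dc/terms/description",
    "http://purl.org/vocab/vann/preferredNamespacePrefix",
    "http://purl.org/vocab/vann/preferredNamespaceUri",
]

def missing_metadata(description):
    """Return the recommended properties absent from a vocabulary's
    description, given as a dict mapping property URIs to values."""
    return [p for p in RECOMMENDED_PROPERTIES if p not in description]

# Example: a vocabulary that only declares a title is missing the rest.
vocab = {"http://purl.org/dc/terms/title": "Example Vocabulary"}
print(missing_metadata(vocab))
```

A real plugin would read these properties from the ontology's RDF self-description rather than a dictionary, but the check itself reduces to this set difference.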

This paper presents a first implementation of this approach, realized as a LOV plugin for the ontology editor \protege.

\section{Reusing vocabulary elements when building ontologies}\label{sec:reuse}
In this section we describe the procedure for reusing available vocabulary terms when building ontologies. In a nutshell, the task consists of the following steps:
\begin{itemize}
\item Search for suitable vocabulary terms to reuse in the LOV repository. The search should be conducted using terms of the application domain.
\item Assess the set of candidate terms returned by the LOV repository. In this particular case, the results coming from the LOV repository include a score for each retrieved term.
\item Select the most appropriate term, taking into account its score.
\item Include the selected term in the ontology that is being developed. There are three alternatives:
\begin{itemize}
\item Include the external term and define the local axioms in the local ontology.
\item Include the external term, create a local term, and define an {\tt rdfs:subClassOf}/{\tt rdfs:subPropertyOf} axiom to relate both terms.
\item Include the external term, create a local term, and define an {\tt owl:equivalentClass}/{\tt owl:equivalentProperty} axiom to relate both terms. It is also possible to add local axioms to the local term.
\end{itemize}
\item You can combine all these approaches, importing some ontologies (e.g., FOAF), reusing terms from others (e.g., Dublin Core), and making your own terms in some cases. The approach chosen in each case can be based on reasoning issues (expressiveness, size, conformance to DL, modularity), querying issues, robustness, linkage, and reusability.
\end{itemize}
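The three alternatives for including an external term can be sketched as the triples they add to the local ontology. Below is a minimal illustration in plain Python, using tuples of full URIs rather than an RDF library; the local namespace {\tt ex:} and the choice of {\tt foaf:Person} as the reused external term are assumptions made for the example:

```python
# Minimal sketch of the three alternatives for reusing an external term
# (here foaf:Person) in a hypothetical local ontology with namespace ex:.
# Triples are plain (subject, predicate, object) tuples of full URIs.

FOAF = "http://xmlns.com/foaf/0.1/"
EX = "http://example.org/onto#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
OWL = "http://www.w3.org/2002/07/owl#"

def alternative_1():
    """Use the external term directly and attach local axioms to it."""
    return [(FOAF + "Person", RDFS + "subClassOf", EX + "Agent")]

def alternative_2():
    """Create a local term and subsume it under the external one."""
    return [(EX + "Person", RDFS + "subClassOf", FOAF + "Person")]

def alternative_3():
    """Create a local term, declare it equivalent to the external one,
    and attach further local axioms to the local term."""
    return [
        (EX + "Person", OWL + "equivalentClass", FOAF + "Person"),
        (EX + "Person", RDFS + "subClassOf", EX + "Agent"),
    ]
```

The first alternative keeps external tool compatibility but modifies a term you do not own; the latter two keep local control at the cost of an extra mapping axiom.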


paper-eswc2015/src/lov.tex (17 additions, 17 deletions)

The intended purpose of LOV \cite{vandenbusschelov} is to help users find and reuse vocabulary terms in Linked Open Data. To achieve this purpose, LOV gives programmatic access to vocabulary metadata and terms through a set of APIs.
The LOV\footnote{\url{http://lov.okfn.org/dataset/lov/}} catalogue is a hub of curated vocabularies used in the Linked Open Data Cloud, as well as other vocabularies suggested by users for reuse. The number of vocabularies is constantly growing, with more than 460 already tagged by categories and languages of publication.
The three main features of LOV are: (1) searching ontologies according to their scope, by means of keyword search and domain browsing; (2) assessing ontologies, by providing a score for each term retrieved by a keyword search; and (3) interconnecting ontologies, whose mutual relationships are explicitly stated using the VOAF vocabulary\footnote{\url{http://lov.okfn.org/vocab/voaf}}.


Furthermore, the LOV APIs give remote access to the many functions of LOV through a set of RESTful services\footnote{\url{http://lov.okfn.org/dataset/lov/apidoc/}}.
The APIs comprise three different types of services, related to: (1) vocabulary terms (classes, properties, datatypes, and instances), with functions to query the LOV search engine, including autocompletion; (2) vocabulary browsing, through which a client can retrieve the current list of vocabularies in the LOV catalogue; and (3) agents, i.e., the ontologies' creators, contributors, or organizations, also searchable with autocompletion.
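As an illustration, a client could call the term-search service as sketched below. This is a hedged example: the base URL is the one cited above, but the exact service path ({\tt term/search}) and parameter names ({\tt q}, {\tt type}, {\tt page\_size}) are assumptions to be checked against the API documentation:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Base URL as cited in the paper; the service path and parameter names
# below are illustrative assumptions, to be checked against the LOV
# API documentation.
API_BASE = "http://lov.okfn.org/dataset/lov/api/v2"

def term_search_url(query, term_type="class", page_size=10):
    """Build a request URL for the hypothetical term-search service."""
    params = urlencode({"q": query, "type": term_type, "page_size": page_size})
    return "{}/term/search?{}".format(API_BASE, params)

def search_terms(query):
    """Fetch and decode candidate terms (requires network access)."""
    with urlopen(term_search_url(query)) as response:
        return json.load(response)

print(term_search_url("Person"))
```

A plugin would feed the user's keyword to such a call and rank the returned candidates by the score that LOV attaches to each term.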


%to continue
