Papers, workshop and tutorials accepted at ESWC 2019

We are very pleased to announce that our group got two papers accepted for presentation at ESWC 2019, the 16th edition of the Extended Semantic Web Conference, which will be held on June 2-6, 2019 in Portorož, Slovenia.

ESWC is a major venue for discussing the latest scientific results and technology innovations around semantic technologies. Building on its past success, ESWC is seeking to broaden its focus to span other relevant research areas in which Web semantics plays an important role. ESWC 2019 will present the latest results in research, technologies, and applications in its field. Besides the technical program organized over twelve tracks, the conference will feature a workshop and tutorial program, a dedicated track on Semantic Web challenges, system descriptions and demos, a poster exhibition, and a doctoral symposium.

Here are the pre-prints of the accepted papers with their abstracts:

Abstract: Attention-based encoder-decoder neural network models have recently shown promising results in goal-oriented dialogue systems. However, these models struggle to reason over and incorporate stateful knowledge while preserving their end-to-end text generation functionality. Since such models can greatly benefit from user intent and knowledge graph integration, in this paper we propose an RNN-based end-to-end encoder-decoder architecture which is trained with joint embeddings of the knowledge graph and the corpus as input. The model provides an additional integration of user intent along with text generation, trained with a multi-task learning paradigm and an additional regularization technique to penalize generating the wrong entity as output. The model further incorporates a knowledge graph entity lookup during inference to guarantee that the generated output is grounded in the local knowledge graph provided. We finally evaluated the model using the BLEU score; empirical evaluation shows that our proposed architecture can improve the performance of task-oriented dialogue systems.
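
The entity lookup during inference can be pictured as constraining the decoder's output distribution to entities present in the local knowledge graph. Below is a minimal, illustrative numpy sketch of such entity masking; the function, the "ent:" token convention, and all names are our own assumptions, not the paper's implementation.

```python
import numpy as np

def masked_entity_softmax(logits, vocab, local_kg_entities):
    """Softmax over decoder logits in which entity tokens absent from
    the local knowledge graph are masked out (illustrative sketch)."""
    mask = np.zeros(len(vocab), dtype=bool)
    for i, token in enumerate(vocab):
        # Hypothetical convention: entity tokens carry an "ent:" prefix.
        if token.startswith("ent:") and token not in local_kg_entities:
            mask[i] = True
    logits = np.where(mask, -np.inf, logits)
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

vocab = ["hello", "ent:Berlin", "ent:Paris", "is", "nice"]
probs = masked_entity_softmax(np.array([0.2, 1.5, 1.1, 0.3, 0.1]),
                              vocab, local_kg_entities={"ent:Berlin"})
print(probs)  # "ent:Paris" gets zero probability; "ent:Berlin" stays eligible
```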

Abstract: Nowadays the organization of scientific events, as well as the submission and publication of papers, has become considerably easier than before. Consequently, metadata of scientific events is increasingly available on the Web, albeit often as raw data in various formats, sacrificing its semantics and interlinking relations. This restricts the usability of this data for, e.g., subsequent analyses and reasoning. Therefore, there is a pressing need to represent this data in a semantic representation, i.e., Linked Data. We present the new release of the EVENTSKG dataset, comprising comprehensive semantic descriptions of scientific events of eight computer science communities. Currently, EVENTSKG is a 5-star dataset containing metadata of 73 top-ranked event series (about 1,950 events in total) established over the last five decades. The new release is a Linked Open Dataset adhering to an updated version of the Scientific Events Ontology (SEO), a reference ontology for event metadata representation, leading to richer and cleaner data. To facilitate the maintenance of EVENTSKG and to ensure its sustainability, EVENTSKG is coupled with a Java API that enables users to create and update event metadata without going into the details of the representation of the dataset. We shed light on event characteristics by demonstrating an analysis of the EVENTSKG data, which provides a flexible means for customization in order to better understand the characteristics of top-ranked CS events.

Acknowledgment

This work was partly supported by the European Union's Horizon 2020 funded projects WDAqua (grant no. 642795), ScienceGRAPH (GA no. 819536), and Cleopatra (grant no. 812997), as well as the BMBF funded project Simple-ML.

Furthermore, we are pleased to announce that we got a workshop and two tutorials accepted, which will be co-located with ESWC 2019.

Here are the accepted workshop and tutorials with their short descriptions:

  • Workshops
    • 1st Workshop on Large Scale RDF Analytics (LASCAR-19) by Hajira Jabeen, Damien Graux, Gezim Sejdiu, Muhammad Saleem and Jens Lehmann.
      Abstract: This workshop on Large Scale RDF Analytics (LASCAR) invites papers and posters related to the problems faced when dealing with the enormous growth of linked datasets and with the advancement of Semantic Web technologies in the domain of large-scale and distributed computing. LASCAR particularly welcomes research efforts exploring the use of generic big data frameworks like Apache Spark and Apache Flink, or specialized libraries like Giraph, TinkerPop, SparkSQL, etc., for Semantic Web technologies. The goal is to demonstrate the use of existing frameworks and libraries for Knowledge Graph processing and to discuss solutions to the challenges and issues arising therein. There will be a keynote by an expert speaker and a panel discussion among experts and scientists working in the area of distributed semantic analytics. LASCAR targets a range of interesting research areas in large-scale processing of Knowledge Graphs, like querying, inference, and analytics; we therefore expect a wide audience interested in attending the workshop.
  • Tutorials
    • SANSA’s Leap of Faith: Scalable RDF and Heterogeneous Data Lakes by Hajira Jabeen, Mohamed Nadjib Mami, Damien Graux, Gezim Sejdiu, and Jens Lehmann.
      Abstract: Scalable processing of Knowledge Graphs (KG) is an important requirement for today's KG engineers. The Scalable Semantic Analytics Stack (SANSA) is a library built on top of Apache Spark that offers several APIs tackling various facets of scalable KG processing. SANSA is organized into several layers: (1) RDF data handling, e.g. filtering, computation of RDF statistics, and quality assessment; (2) SPARQL querying; (3) inference reasoning; and (4) analytics over KGs. In addition to processing native RDF, SANSA also allows users to query a wide range of heterogeneous data sources (e.g. files stored in Hadoop or other popular NoSQL stores) uniformly using SPARQL. This tutorial aims to provide an overview, detailed discussion, and a hands-on session on SANSA, covering all the aforementioned layers using simple use-cases. (A small illustrative sketch of this kind of scalable RDF processing follows after this list.)
    • Build a Question Answering system overnight by Denis Lukovnikov, Gaurav Maheshwari, Jens Lehmann, Mohnish Dubey, and Priyansh Trivedi.
      Abstract: With this tutorial, we aim to provide the participants with an overview of the field of Question Answering over Knowledge Graphs, insights into commonly faced problems, recent trends, and developments. In doing so, we hope to provide a suitable entry point for people new to this field and ease their process of making informed decisions while creating their own QA systems. At the end of the tutorial, the audience will have hands-on experience of developing a working deep learning based QA system.
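
Returning to the SANSA tutorial: since SANSA itself is a Scala library on top of Apache Spark, here is a language-neutral illustration in PySpark of the kind of distributed RDF statistics its RDF layer computes. This is a simplified sketch with a naive N-Triples parser and a hypothetical input path, not SANSA's actual API.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdf-stats-sketch").getOrCreate()

def parse_ntriple(line):
    # Naive N-Triples parser: subject, predicate, rest-of-line object.
    s, p, o = line.strip().rstrip(" .").split(" ", 2)
    return (s, p, o)

triples = (spark.sparkContext.textFile("hdfs:///data/dataset.nt")  # hypothetical path
           .filter(lambda l: l.strip() and not l.startswith("#"))
           .map(parse_ntriple))

# One statistic a scalable RDF layer typically offers: predicate usage counts.
predicate_counts = (triples.map(lambda t: (t[1], 1))
                    .reduceByKey(lambda a, b: a + b)
                    .sortBy(lambda kv: -kv[1]))
for predicate, count in predicate_counts.take(10):
    print(predicate, count)
```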

Looking forward to seeing you at ESWC 2019.

Demo and workshop papers accepted at The Web Conference (formerly WWW) 2019

We are very pleased to announce that our group got a demo paper accepted for presentation at the 2019 edition of The Web Conference (formerly known as the WWW conference), which will be held on May 13-17, 2019, in San Francisco, USA.

The 2019 edition of The Web Conference will offer many opportunities to present and discuss the latest advances in academia and industry, including research tracks, workshops, tutorials, an exhibition, posters, demos, a developers' track, a W3C track, an industry track, a PhD symposium, challenges, a minute of madness, an international project track, W4A, a hackathon, the BIG web, and a journal track.

Here is the pre-print of the accepted paper with its abstract:

Abstract: Squerall is a tool that allows the querying of heterogeneous, large-scale data sources by leveraging state-of-the-art Big Data processing engines: Spark and Presto. Queries are posed on-demand against a Data Lake, i.e., directly on the original data sources without requiring prior data transformation. We showcase Squerall’s ability to query five different data sources, including, inter alia, the popular Cassandra and MongoDB. In particular, we demonstrate how it can jointly query heterogeneous data sources, and how interested developers can easily extend it to support additional data sources. Graphical user interfaces (GUIs) are offered to support users in (1) building intra-source queries, and (2) creating required input files.
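
To give a flavor of the kind of query Squerall evaluates, here is an illustrative SPARQL query (shown as a Python string; all prefixes, classes, and properties are hypothetical). The assignment of ?product to, say, Cassandra and ?review to MongoDB is declared in Squerall's external mapping files, not in the query itself, so the query stays source-agnostic.

```python
# Illustrative only: the vocabulary and the source assignments are assumptions.
QUERY = """
PREFIX ex: <http://example.org/schema#>
SELECT ?name ?rating
WHERE {
  ?product a ex:Product ;        # e.g. resolved against Cassandra via mappings
           ex:name ?name .
  ?review  ex:reviews ?product ; # e.g. resolved against MongoDB via mappings
           ex:rating ?rating .
}
"""
```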

Furthermore, we are pleased to announce that we got a workshop paper accepted at the 5th Workshop on Managing the Evolution and Preservation of the Data Web (MEPDaW 2019), which will be co-located with The Web Conference 2019.

MEPDaW’19 aims at addressing challenges and issues in managing Knowledge Graph evolution and preservation by providing a forum for researchers and practitioners to discuss, exchange, and disseminate their ideas and work, to network, and to cross-fertilise new ideas.

Here is the accepted workshop paper with its abstract:

Abstract: Knowledge graphs are dynamic in nature; new facts about an entity are added or removed over time. Therefore, multiple versions of the same knowledge graph exist, each of which represents a snapshot of the knowledge graph at some point in time. Entities within the knowledge graph undergo evolution as new facts are added or removed. Automatically generating a summary out of different versions of a knowledge graph is a long-studied problem. However, most of the existing approaches are limited to pair-wise version comparison, making it difficult to capture the complete evolution across several versions of the same graph. To overcome this limitation, we envision an approach to create a summary graph capturing the temporal evolution of entities across different versions of a knowledge graph. The entity summary graphs may then be used for documentation generation, profiling, or visualization purposes. First, we take different temporal versions of a knowledge graph and convert them into RDF molecules. Secondly, we perform Formal Concept Analysis on these molecules to generate summary information. Finally, we apply a summary fusion policy in order to generate a compact summary graph which captures the evolution of entities.
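
A minimal sketch of the first step, assuming two snapshot files in N-Triples format: convert each version into RDF molecules (here, the set of predicate-object pairs grouped per subject) and detect per-entity changes between versions. The Formal Concept Analysis and fusion steps are omitted, and the file names are hypothetical.

```python
from collections import defaultdict
from rdflib import Graph

def molecules(graph):
    """Group a graph's triples into RDF molecules keyed by subject."""
    mols = defaultdict(set)
    for s, p, o in graph:
        mols[s].add((p, o))
    return mols

v1, v2 = Graph(), Graph()
v1.parse("kg_v1.nt", format="nt")  # hypothetical snapshot of version 1
v2.parse("kg_v2.nt", format="nt")  # hypothetical snapshot of version 2

m1, m2 = molecules(v1), molecules(v2)
for entity in sorted(m1.keys() | m2.keys()):
    added = m2.get(entity, set()) - m1.get(entity, set())
    removed = m1.get(entity, set()) - m2.get(entity, set())
    if added or removed:
        print(f"{entity}: +{len(added)} facts, -{len(removed)} facts")
```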

Acknowledgment
This research was supported by the German Ministry of Education and Research (BMBF) in the context of the project MLwin (Maschinelles Lernen mit Wissensgraphen, grant no. 01IS18050F).

Looking forward to seeing you at The Web Conference 2019.

Paper accepted at Knowledge-Based Systems Journal

We are very pleased to announce that our group got a paper accepted at the Knowledge-Based Systems journal.

Knowledge-Based Systems is an international, interdisciplinary, and applications-oriented journal. It focuses on systems that use knowledge-based (KB) techniques to support human decision-making, learning, and action; emphasizes the practical significance of such KB systems and their development and usage; and covers their implementation: the design process, models and methods, software tools, decision-support mechanisms, user interactions, organizational issues, knowledge acquisition and representation, and system architectures.

Here is the accepted paper with its abstract:

Abstract: Noise is often present in real datasets used for training Machine Learning classifiers. Its disruptive effects on the learning process may include increasing the complexity of the induced models, a higher processing time, and reduced predictive power in the classification of new examples. Therefore, treating noisy data in a preprocessing step is crucial for improving data quality and reducing its harmful effects on the learning process. There are various filters using different concepts for identifying noisy examples in a dataset. Their ability in noise preprocessing is usually assessed by the identification of artificial noise injected into one or more datasets. This is performed to overcome the limitation that only a domain expert can guarantee whether a real example is indeed noisy. The most frequently used label noise injection method is the noise at random method, in which a percentage of the training examples have their labels randomly exchanged. This is carried out regardless of the characteristics and example space positions of the selected examples. This paper proposes two novel methods to inject label noise in classification datasets. These methods, based on complexity measures, can produce more challenging and realistic noisy datasets by disturbing the labels of critical examples situated close to the decision borders, and can improve the noise filtering evaluation. An extensive experimental evaluation of different noise filters is performed using public datasets with imputed label noise, and the influence of the noise injection methods is compared in both the data preprocessing and classification steps.
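
The contrast between the two injection philosophies can be sketched in a few lines. Below, random_noise implements the standard noise-at-random baseline, while border_noise flips the labels of examples whose nearest same-class neighbour is far relative to their nearest other-class neighbour, one simple stand-in for a complexity measure; the paper's actual measures may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors

def random_noise(y, rate, rng):
    """Noise at random: flip the labels of a random sample of examples."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y[idx] = 1 - y[idx]  # binary labels assumed
    return y

def border_noise(X, y, rate):
    """Complexity-based sketch: flip the labels of borderline examples,
    scored by intra-class vs. inter-class nearest-neighbour distance."""
    y = y.copy()
    dist, ind = NearestNeighbors(n_neighbors=len(y)).fit(X).kneighbors(X)
    scores = []
    for i in range(len(y)):
        same = dist[i][y[ind[i]] == y[i]][1]    # nearest same-class (skip self)
        other = dist[i][y[ind[i]] != y[i]][0]   # nearest other-class
        scores.append(same / (other + 1e-12))   # high score -> near the border
    idx = np.argsort(scores)[-int(rate * len(y)):]
    y[idx] = 1 - y[idx]
    return y

X, y = make_classification(n_samples=200, n_classes=2, random_state=0)
rng = np.random.default_rng(0)
print((random_noise(y, 0.1, rng) != y).sum())  # 20 random flips
print((border_noise(X, y, 0.1) != y).sum())    # 20 borderline flips
```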

Paper accepted at EDBT 2019

We are very pleased to announce that our group got a paper accepted for presentation at the 2019 edition of the EDBT conference, which will be held on March 26-29, 2019 in Lisbon, Portugal.

The International Conference on Extending Database Technology (EDBT) is a leading international forum for database researchers, practitioners, developers, and users to discuss cutting-edge ideas, and to exchange techniques, tools, and experiences related to data management.

Here is the pre-print of the accepted paper with its abstract:

Abstract: Point of Interest (POI) data constitutes the cornerstone in many modern applications. From navigation to social networks, tourism, and logistics, we use POI data to search, communicate, decide and plan our actions. POIs are semantically diverse and spatio-temporally evolving entities, having geographical, temporal, and thematic relations. Currently, integrating POI datasets to increase their coverage, timeliness, accuracy and value is a resource-intensive and mostly manual process, with no specialized software available to address the specific challenges of this task. In this paper, we present an integrated toolkit for transforming, linking, fusing and enriching POI data, and extracting additional value from them. In particular, we demonstrate how Linked Data technologies can address the limitations, gaps and challenges of the current landscape in Big POI data integration. We have built a prototype application that enables users to define, manage and execute scalable POI data integration workflows built on top of state-of-the-art software for geospatial Linked Data. This application abstracts and hides away the underlying complexity, automates quality-assured integration, scales efficiently for world-scale integration tasks, and lowers the entry barrier for end-users. Validated against real-world POI datasets in several application domains, our system has shown great potential to address the requirements and needs of cross-sector, cross-border and cross-lingual integration of Big POI data.

Acknowledgment

This work was partially funded by the EU H2020 project SLIPO (#731581).

Looking forward to seeing you at the EDBT 2019 conference.

Paper accepted at Oxford Bioinformatics Journal

We are very pleased to announce that our group got a paper accepted at the Oxford Bioinformatics journal.

The Oxford Bioinformatics journal is a bi-weekly, peer-reviewed scientific journal that focuses on genome bioinformatics and computational biology. The journal is a leader in its field and publishes scientific papers relevant to academic and industrial researchers.

Here is the pre-print of the accepted paper with its abstract:

Abstract: Knowledge graph embeddings (KGEs) have received significant attention in other domains due to their ability to predict links and create dense representations for graphs’ nodes and edges. However, the software ecosystem for their application to bioinformatics remains limited and inaccessible for users without expertise in programming and machine learning. Therefore, we developed BioKEEN (Biological KnowlEdge EmbeddiNgs) and PyKEEN (Python KnowlEdge EmbeddiNgs) to facilitate their easy use through an interactive command line interface. Finally, we present a case study in which we used a novel biological pathway mapping resource to predict links that represent pathway crosstalks and hierarchies. Availability: BioKEEN and PyKEEN are open source Python packages publicly available under the MIT License at https://github.com/SmartDataAnalytics/BioKEEN and https://github.com/SmartDataAnalytics/PyKEEN as well as through PyPI.
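
For readers who want to try the packages, here is a minimal sketch of training a link-prediction model with PyKEEN. It uses the pipeline API of current PyKEEN releases, which may differ from the interactive command line interface described in the paper; "Nations" and "TransE" are standard PyKEEN identifiers.

```python
# pip install pykeen
from pykeen.pipeline import pipeline

# Train TransE on a small built-in benchmark and evaluate link prediction.
result = pipeline(
    dataset="Nations",
    model="TransE",
    training_kwargs=dict(num_epochs=100),
)
print(result.get_metric("hits@10"))         # rank-based evaluation metric
result.save_to_directory("transe_nations")  # persist model and metrics
```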

Acknowledgment

We thank our partners from the Bio2Vec, MLwin, and SimpleML projects for their assistance. This research was supported by the Bio2Vec project (http://bio2vec.net/, CRG6 grant 3454) with funding from King Abdullah University of Science and Technology (KAUST).

Papers accepted at AAAI / ComplexQA & RecNLP Workshops

We are very pleased to announce that our group got two papers accepted for presentation at workshops of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), namely ComplexQA 2019 and RecNLP 2019, which will be held January 27 – February 1, 2019 at the Hilton Hawaiian Village, Honolulu, Hawaii, USA.

The purpose of the Association for the Advancement of Artificial Intelligence (AAAI) conference series is to promote research in artificial intelligence (AI) and foster scientific exchange between researchers, practitioners, scientists, students, and engineers in AI and its affiliated disciplines.

The Reasoning for Complex Question Answering (CQA) workshop is a new series of workshops on reasoning for complex question answering (QA). QA has become a crucial application problem in evaluating the progress of AI systems in the realm of natural language processing and understanding, and in measuring the progress of machine intelligence in general. The computational linguistics communities (ACL, NAACL, EMNLP, etc.) have devoted significant attention to the general problem of machine reading and question answering, as evidenced by the emergence of strong technical contributions and challenge datasets such as SQuAD. However, most of these advances have focused on “shallow” QA tasks that can be tackled very effectively by existing retrieval-based techniques. Instead of measuring the comprehension and understanding of the QA systems in question, these tasks merely test the capability of a technique to “attend” or focus attention on specific words and pieces of text. The main aim of this workshop is to bring together experts from the computational linguistics (CL) and AI communities to: (1) catalyze progress on the CQA problem and create a vibrant test-bed of problems for various AI sub-fields; and (2) present a generalized task that can act as a harbinger of progress in AI.

Recommender Systems Meet Natural Language Processing (RecNLP) is an interdisciplinary workshop covering the intersection between Recommender Systems (RecSys) and Natural Language Processing (NLP). The primary goal of RecNLP is to identify common ideas and techniques being developed in both disciplines, to further explore the synergy between the two, and to bring together researchers from both domains to encourage and facilitate future collaborations.

Here are the pre-prints of the accepted papers with their abstracts:

Abstract: Translating natural language to SQL queries for table-based question answering is a challenging problem and has received significant attention from the research community. In this work, we extend a pointer-generator network and investigate how query decoding order matters in semantic parsing for SQL. Even though our model is a straightforward extension of a general-purpose pointer-generator, it outperforms early work for WikiSQL and remains competitive to concurrently introduced, more complex models. Moreover, we provide a deeper investigation of the potential “order-matters” problem due to having multiple correct decoding paths, and investigate the use of REINFORCE as well as a non-deterministic oracle in this context.

Abstract: Discovering relevant research collaborations is crucial for performing extraordinary research and promoting the careers of scholars. Therefore, building recommender systems capable of suggesting relevant collaboration opportunities is of huge interest. Most of the existing approaches for collaboration and co-author recommendation focus on semantic similarities using bibliographic metadata such as publication counts and citation network analysis. These approaches neglect relevant and important metadata information such as author affiliation and conferences attended, affecting the quality of the recommendations. To overcome these drawbacks, we formulate the task of scholarly recommendation as a link prediction task based on knowledge graph embeddings. A knowledge graph containing scholarly metadata is created and enriched with textual descriptions. We tested the quality of the recommendations based on the TransE, TransH, and DistMult models, which consider only triples in the knowledge graph, and DKRL, which in addition incorporates natural language descriptions of entities during training.
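
To make the link-prediction formulation concrete: TransE, for example, models a relation as a translation in the embedding space and scores a candidate triple (h, r, t) by how closely h + r lands on t, so recommending collaborators amounts to ranking candidate co-author triples by this score. A toy numpy illustration with random vectors and hypothetical entity names:

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 50

# Toy embeddings; in practice these are learned from the scholarly KG.
entities = {name: rng.normal(size=dim)
            for name in ("author_a", "author_b", "author_c")}
relations = {"co_author_with": rng.normal(size=dim)}

def transe_score(h, r, t):
    """TransE plausibility: the smaller ||h + r - t||, the more plausible,
    so we negate the norm to rank higher scores as better."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Rank candidate collaborators for author_a by triple plausibility.
candidates = ["author_b", "author_c"]
ranked = sorted(candidates, reverse=True,
                key=lambda t: transe_score("author_a", "co_author_with", t))
print(ranked)
```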

Looking forward to seeing you at AAAI-19.

Dr. Katherine Thornton visits SDA

Dr. Katherine Thornton from the Yale University Library, New Haven, Connecticut, USA, visited the SDA group on November 28, 2018.

Katherine Thornton is an information scientist at the Yale University Library working on creating metadata as linked open data. Katherine earned a PhD in Information Science from the University of Washington in 2016 and works on the Scaling Emulation as a Service Infrastructure (EaaSI) project describing the software and configured environments in Wikidata. Katherine has been a volunteer contributor to the Wikidata project since 2012.

Dr. Thornton was invited to give talks on “Sharing RDF data models and validating RDF graphs with ShEx” and “Documenting and preserving programming languages and software in Wikidata” at the SWIB conference (Semantic Web in Libraries). SWIB is an annual conference, held in 2018 for the 10th time, focusing on Linked Open Data (LOD) in libraries and related organizations. It is well established as an event where IT staff, developers, librarians, and researchers from all over the world meet, mingle, and learn from each other. The topics of talks and workshops at SWIB revolve around opening data, linking data, and creating tools and software for LOD production scenarios. These areas of focus are supplemented by presentations of research projects in applied sciences, industry applications, and LOD activities in other areas.

At the bi-weekly “SDA colloquium presentations” she gave a talk on “Wikidata for Digital Preservation”, describing the workflow of creating metadata for resources in the domain of computing on the Wikidata platform, and how these URIs are reused in metadata describing pre-configured emulated computing environments in which users can interact with legacy software. She introduced this project in the context of current work at Yale University Library to provide Emulation as a Service. Afterwards, she discussed her data curation work in Wikidata as well as WikiDP, the Wikidata for Digital Preservation portal: a streamlined interface for the digital preservation community to interact with Wikidata, available online at http://wikidp.org.

The goal of Dr. Thornton’s visit was to exchange experience and ideas on digital preservation using RDF technologies. In addition to presenting various use-cases where these technologies have been applied, Dr. Thornton shared with our group future research problems and challenges related to this research area. During the meeting, SDA core research topics and main research projects were presented, and we investigated suitable topics for future collaborations with Dr. Thornton and her research group.

Paper accepted at Nature Scientific Data Journal

We are very pleased to announce that our group got a paper accepted at the Scientific Data journal, an open access publication from Nature Research for descriptions of scientifically valuable datasets.

Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance, and surprising conclusions. Nature also provides rapid, authoritative, insightful, and arresting news and interpretation of topical and coming trends affecting science, scientists, and the wider public. Scientific Data is a peer-reviewed, open-access journal for descriptions of scientifically valuable datasets, and research that advances the sharing and reuse of scientific data. It covers a broad range of research disciplines, including descriptions of big or small datasets, from major consortiums to single research groups. Scientific Data primarily publishes Data Descriptors, a new type of publication that focuses on helping others reuse data, and crediting those who share.

Here is the accepted paper with its abstract:

Abstract: Patents are widely used to protect intellectual property and as a measure of innovation output. Each year, the USPTO grants over 150,000 patents to individuals and companies all over the world; in fact, more than 280,000 patent grants were issued in the US in 2015. However, accessing, searching, and analyzing those patents is often still cumbersome and inefficient. To overcome those problems, Google indexes patents and converts them to Extensible Markup Language (XML) files using Optical Character Recognition (OCR) techniques. In this article, we take this idea one step further and provide semantically rich, machine-readable patents following the Linked Data principles. We have converted the data spanning 12 years (2005-2017) from XML to the Resource Description Framework (RDF) format, conforming to the Linked Data principles, and made it publicly available for re-use. This data can be integrated with other data sources in order to further simplify use cases such as trend analysis, structured patent search and exploration, and societal progress measurements. We describe the conversion, publishing, and interlinking process, along with several use cases for the USPTO Linked Patent data.
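
To illustrate the kind of output such a conversion produces, here is a small rdflib sketch that turns one toy patent record into RDF triples. The namespace and property names are ours for illustration; the published dataset defines its own vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF

PAT = Namespace("http://example.org/patent/")  # hypothetical namespace

g = Graph()
patent = PAT["US9000000"]  # toy patent identifier
g.add((patent, RDF.type, PAT.Patent))
g.add((patent, DCTERMS.title, Literal("Example locking mechanism")))
g.add((patent, DCTERMS.issued, Literal("2015-04-07")))
g.add((patent, PAT.inventor, Literal("Jane Doe")))

print(g.serialize(format="turtle"))
```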

Prof. Dr. John Domingue visits SDA

Prof. John Domingue from The Open University (OU), Milton Keynes, UK, visited the SDA group on November 7th, 2018.

John Domingue is a full Professor at the Open University and Director of the Knowledge Media Institute in Milton Keynes, focusing on research in the Semantic Web, Linked Data, Services, Blockchain, and Education. He also serves as the President of STI International, a semantics focused networking organization which runs the ESWC conference series.

His current work focuses on how a combination of blockchain and Linked Data technologies can be used to process personal data in a decentralized, trusted manner, and how this can be applied in the educational domain (see http://blockchain.open.ac.uk/). This work is funded by a number of projects. The Institute of Coding is a £20M funded UK initiative which aims to increase the graduate computing skills base in the UK. As leader of the first of the five project themes, John Domingue focuses on the use of blockchain micro-accreditation to support the seamless transition of learners between UK universities and UK industry. From January 2019, he will play a leading role in the EU-funded QualiChain project, which aims to revolutionize public education and its relationship to the labor market and policy-making by disrupting the way accredited educational titles and other qualifications are archived, managed, shared, and verified, taking advantage of blockchain, semantics, data analytics, and gamification technologies.

From January 2015 to January 2018 he served as the Project Coordinator of the European Data Science Academy, which aimed to address the skills gap in data science across Europe. The project was a success, leading to a number of outcomes, including a combined data science skills and courses portal that enables learners to find jobs across Europe matching their qualifications.

Prof. Domingue was invited to give a talk on “Towards the Decentralisation of Personal Data through Blockchains and Linked Data” at the Computer Science Colloquium of the University of Bonn, co-organized by SDA.

At the bi-weekly “SDA colloquium presentations” he presented KMi and the main research topics of the institute. The goal of Prof. Domingue’s visit was to exchange experience and ideas on decentralized applications using blockchain technologies in combination with Linked Data. In addition to presenting various use-cases where blockchain and Linked Data technologies have helped communities to gain useful insights, Prof. Domingue shared with our group future research problems and challenges related to this research area. During the meeting, SDA core research topics and main research projects were presented, and we investigated suitable topics for future collaborations with Prof. Domingue and his research group.

Paper accepted at JURIX 2018

We are very pleased to announce that our group got one paper accepted for presentation at the 31st International Conference on Legal Knowledge and Information Systems (JURIX 2018), which will be held on December 12–14, 2018 in Groningen, the Netherlands.

JURIX organizes yearly conferences on the topic of Legal Knowledge and Information Systems. The proceedings of the conferences are published in the Frontiers of Artificial Intelligence and Applications series of IOS Press.

The JURIX conference attracts a wide variety of participants from government, academia, and business. It is accompanied by workshops on topics such as eGovernment, legal ontologies, legal XML, alternative dispute resolution (ADR), argumentation, and deontic logic.

Here is the accepted paper with its abstract:

  • A Question Answering System on Regulatory Documents by Diego Collarana, Timm Heuss, Jens Lehmann, Ioanna Lytra, Gaurav Maheshwari, Rostislav Nedelchev, Thorsten Schmidt, Priyansh Trivedi.
    Abstract: In this work, we outline an approach for question answering over regulatory documents. In contrast to traditional means of accessing information in the domain, the proposed system attempts to deliver an accurate and precise answer to user queries. This is accomplished by a two-step approach which first selects relevant paragraphs given a question, and then compares the selected paragraph with the user query to predict a span in the paragraph as the answer. We employ neural network-based solutions for each step and compare them with existing and alternative baselines. We perform our evaluations with a gold-standard benchmark comprising over 600 questions on the MaRisk regulatory document. In our experiments, we observe that our proposed system outperforms the other baselines.
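
To make the two-step pipeline concrete, here is a toy sketch of the first step, paragraph selection, using a simple TF-IDF retrieval baseline rather than the paper's neural models; the regulatory paragraphs are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of regulatory paragraphs (invented text).
paragraphs = [
    "Institutions shall establish an appropriate risk management framework.",
    "Outsourcing agreements must be documented and reviewed annually.",
    "The management board is responsible for the risk strategy.",
]
question = "Who is responsible for the risk strategy?"

# Rank paragraphs by cosine similarity of TF-IDF vectors to the question.
vectorizer = TfidfVectorizer().fit(paragraphs + [question])
scores = cosine_similarity(vectorizer.transform([question]),
                           vectorizer.transform(paragraphs))[0]
best = int(scores.argmax())
print(paragraphs[best])  # paragraph handed to the span-prediction step
```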

Acknowledgment
This research was partially supported by an EU H2020 grant provided for the WDAqua project (GA no. 642795).


Looking forward to seeing you at JURIX 2018.