Keynote and Invited Talks

 

Fausto Giunchiglia 
Website | Google Scholar 
 

Title: Stratified Data Integration 

Abstract: In this talk, we will present a general stratified approach that takes generic data, possibly in multiple languages, and integrates them into a single knowledge graph. We will also discuss an application of the methodology in the health domain. 

Amit Sheth 
Website | Google Scholar 
 

Title: Semantics of the Black Box: Using a knowledge-infused learning approach to make AI systems more interpretable and explainable 

Abstract: The recent series of deep learning innovations has shown enormous potential to impact individuals and society, both positively and negatively. Deep learning models, utilizing massive computing power and enormous datasets, have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the black-box nature of deep learning models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the systems' interpretability and explainability. Furthermore, deep learning methods have not yet proven able to effectively utilize the relevant domain knowledge and experience critical to human understanding. This aspect, missing in early data-focused approaches, necessitated knowledge-infused learning and other strategies for incorporating computational knowledge. Rapid advances in our ability to create and reuse structured knowledge as knowledge graphs make this task viable. Incorporating human-generated or curated knowledge also makes a system intended for expert (e.g., clinical) decision making more acceptable and trustworthy than the purely data-driven systems that have worked well for consumer search and recommendations. In this talk, we will outline how knowledge, provided as a knowledge graph, is incorporated into deep learning methods using knowledge-infused learning. We then discuss how this makes a fundamental difference to the interpretability and explainability of current approaches and illustrate it with examples from a few domains. 

Denny Vrandečić 
Website | Google Scholar
 

Title: Wikidata and Abstract Wikipedia 

Abstract: Wikidata is the largest collaboratively edited open knowledge graph in the world. In this talk, we describe Wikidata, its current state of the art, and its limitations. We also introduce the Abstract Wikipedia project, which will extend the expressivity of Wikidata and the coverage of the Wikipedia projects in a large number of languages. 
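Wikidata's public SPARQL endpoint (https://query.wikidata.org/sparql) returns results in the standard SPARQL 1.1 JSON format. The sketch below shows an illustrative query and a helper that extracts labels from such a results document; the sample response is invented for demonstration and parsed locally rather than fetched over the network.

```python
import json

# An illustrative SPARQL query for Wikidata's public endpoint:
# labels of items that are instances of "house cat" (wd:Q146).
QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

def labels_from_results(results: dict) -> list:
    """Pull the itemLabel values out of a SPARQL 1.1 JSON results document."""
    return [b["itemLabel"]["value"] for b in results["results"]["bindings"]]

# A sample response in the SPARQL JSON results format (values invented).
sample = json.loads("""
{"results": {"bindings": [
  {"item": {"value": "http://www.wikidata.org/entity/Q1"},
   "itemLabel": {"value": "Mrs Chippy"}}
]}}
""")

print(labels_from_results(sample))  # ['Mrs Chippy']
```

The same helper works unchanged on a live response, since every SPARQL endpoint that supports the JSON results format uses this `results`/`bindings` layout.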

P. Sreenivasa Kumar 
Website | Google Scholar 
 

Title: Ontologies for Program Analysis 

Abstract: This talk focuses on a novel application of ontologies in the domain of static analysis of programs. We discuss an extensible static analysis framework called OPAL (Ontology-based Program AnaLysis). It is known that a program can be represented as RDF triples using a programming-language-specific ontology. Going further, the OPAL framework enables the formal representation of external knowledge, such as the usage knowledge of libraries and program-domain knowledge. Utilising this knowledge and the program triples, we compute additional semantic information, called the static trace of the program. The program triples and the semantic triples together are called the consolidated program triples. These triples are stored and used to accelerate the execution of client analyses specified by the end user. In the OPAL framework, a client analysis is specified by a set of conjunctive expressions that use SPARQL (the W3C RDF query language) queries. We show that the framework is effective for client analyses that warrant sound and approximate information. 
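The core idea of representing a program as triples and running a client analysis as a conjunctive pattern match can be sketched in a few lines. The predicate names (opal:calls, opal:declaredIn) and the tiny program below are invented stand-ins for OPAL's actual programming-language ontology, not its real vocabulary.

```python
# Toy "program as triples" store: (subject, predicate, object) tuples
# with invented predicate names standing in for a real ontology.
triples = {
    ("m:main",  "opal:calls",      "m:parse"),
    ("m:main",  "opal:calls",      "m:render"),
    ("m:parse", "opal:calls",      "lib:read"),
    ("m:parse", "opal:declaredIn", "f:parser.py"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a variable,
    like an unbound variable in a SPARQL triple pattern."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Client analysis: which functions does m:main call directly?
callees = {o for (_, _, o) in match(s="m:main", p="opal:calls")}
print(sorted(callees))  # ['m:parse', 'm:render']
```

A conjunctive client analysis is then a join of several such patterns on shared variables, which is exactly what a SPARQL basic graph pattern expresses over the consolidated program triples.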

Konstantin Todorov 
Website | Google Scholar 
 

Title: A knowledge graph of controversial claims for predicting claim topics 

Abstract: Expressing opinions and interacting with others on the Web has led to the production of an abundance of online discourse data, such as claims and viewpoints on controversial topics, their sources, and contexts (e.g., events, entities). These data constitute a valuable source of insights for studies into misinformation spread, bias reinforcement, echo chambers, or political agenda-setting. While today's knowledge graphs enable data reuse and federation, thus improving information retrieval and facilitating research and knowledge discovery in various fields, they do not store information about claims and related online discourse data, making it difficult to access, query, and reuse this wealth of information. In my talk, I will present recent work on the construction of ClaimsKG, a knowledge graph of fact-checked controversial claims, which facilitates structured queries about their truth values, authors, dates, journalistic reviews, and other kinds of metadata, and provides ground-truth data for a number of tasks relevant to the analysis of societal debates on the Web. I will discuss perspectives on modeling claims in a generalized and contextualized manner, as well as related challenges such as claim disambiguation and the assessment of claim relatedness. I will present preliminary results on learning claim vector representations (graph embeddings) from ClaimsKG and other structured sources, such as DBpedia, and their application to the task of predicting claim topics. 
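A structured query over a claims knowledge graph might look like the sketch below, which builds a SPARQL query for fact-checked claims carrying a given truth rating. The query assumes the schema.org ClaimReview vocabulary that fact-checking markup commonly builds on; the exact property names and rating strings are illustrative assumptions, not ClaimsKG's authoritative schema.

```python
# Build an illustrative SPARQL query for claims with a given truth rating.
# Property names (schema:itemReviewed, schema:alternateName, ...) follow
# the schema.org ClaimReview pattern but are assumptions for illustration.

def claims_by_rating(rating, limit=10):
    """Return a SPARQL query string selecting claims, their text,
    and publication date, filtered by review rating label."""
    return f"""
PREFIX schema: <http://schema.org/>
SELECT ?claim ?text ?date WHERE {{
  ?review a schema:ClaimReview ;
          schema:itemReviewed ?claim ;
          schema:reviewRating ?r .
  ?claim schema:text ?text ;
         schema:datePublished ?date .
  ?r schema:alternateName "{rating}" .
}}
LIMIT {limit}
"""

q = claims_by_rating("FALSE")
print(q)
```

Queries of this shape are also a convenient way to extract labeled ground-truth sets (e.g., all claims rated false within a date range) for downstream tasks such as topic prediction.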

Parthasarathi Mukhopadhyay 
Website | Google Scholar 
 

Title: Library Carpentry: A Journey towards Data Librarianship 

Abstract: The concept of ‘library carpentry’ has its origin in ‘data carpentry’, and the essence of this emerging concept in the domain of library and information science is building the software and data skills required to handle the challenges of the 21st century. The major skills required for library carpentry include, but are not limited to, the following: applying concepts from data science to library tasks; identifying and using best practices in data structures; learning how to programmatically transform and map data from one form to another; configuring data visualization; and automating repetitive, error-prone tasks. The concept is fully compatible with the FAIR (Findable, Accessible, Interoperable, Reusable) principles and covers techniques and tools generally not included in LIS courses, such as regular expressions (regex), JSON parsing, shell scripting, Python scripting, web scraping, and data wrangling. This keynote speech explores the possibilities of applying some of these techniques to enriching bibliographic datasets, creating new information services, improving existing information services, and supporting informed decision making, to name a few. It attempts to demonstrate how easily these techniques can be used for an array of library tasks, such as collecting institute-specific publication data; measuring institutional performance in open access; gathering informetrics indicators for institute-specific publications; converting poor MARC data to rich MARC data; fetching review ratings for books; reconciling locally assigned name-authority and subject-authority datasets with globally available linked open data sets; and generating named entities from bibliographic notes and abstracts. 
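As a small taste of the regex skills the talk mentions, the sketch below pulls ISBN-13s out of free-text bibliographic notes and normalises them, the kind of transformation used when enriching poor MARC records. The sample records are invented for illustration.

```python
import re

# Invented free-text bibliographic notes containing stray ISBNs.
notes = [
    "Includes index. ISBN 978-0-13-468599-1 (pbk.)",
    "Rev. ed. of earlier work; ISBN: 9781491946008",
    "No standard number recorded.",
]

# ISBN-13: a 978/979 prefix followed by ten more digits,
# with optional hyphens or spaces between digits.
ISBN13 = re.compile(r"97[89][-\s]?(?:\d[-\s]?){9}\d")

def extract_isbns(note):
    """Return hyphen/space-free ISBN-13s found in a note."""
    return [re.sub(r"[-\s]", "", m) for m in ISBN13.findall(note)]

print([extract_isbns(n) for n in notes])
# [['9780134685991'], ['9781491946008'], []]
```

Once extracted, such numbers can be written into the proper MARC field or used as reconciliation keys against external book-data services.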

Clement Jonquet 
Website | Google Scholar 
 

Title: How to use ontology repositories and ontology-based services 

Abstract: We present how to use ontologies (or other semantic resources) through domain-specific
ontology repositories such as BioPortal/AgroPortal/EcoPortal. We will cover:
– Ontology selection and recommendation
– How to use an ontology in the repository
– Semantic annotation of text data
– Ontology alignment management
– Automatic access to ontologies within the repositories (SPARQL & REST) 
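For the last point, programmatic access typically goes through a REST API such as BioPortal's (https://data.bioontology.org), which includes an Annotator service for tagging free text with ontology terms. The sketch below only builds the request URL, so it runs offline; YOUR_API_KEY is a placeholder for the key BioPortal issues on registration, and AgroPortal/EcoPortal expose analogous services on their own hosts.

```python
from urllib.parse import urlencode

# Base URL of the BioPortal Annotator REST service.
BASE = "https://data.bioontology.org/annotator"

def annotator_url(text, apikey, ontologies=None):
    """Build an Annotator request URL; `ontologies` optionally
    restricts annotation to a comma-separated list of acronyms."""
    params = {"text": text, "apikey": apikey}
    if ontologies:
        params["ontologies"] = ontologies
    return BASE + "?" + urlencode(params)

url = annotator_url("melanoma of the skin", "YOUR_API_KEY", ontologies="NCIT")
print(url)
```

The resulting URL can be fetched with any HTTP client; the service returns JSON listing the ontology classes matched in the input text.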

Armando Stellato 
Website | Google scholar 
 

Title: An Environment for Seamless Modeling of Ontologies, Thesauri and Lexicons 

Abstract: Initially developed by FAO in the context of the NeOn project as a collaborative environment for the development of the Agrovoc thesaurus, and later generalized into a SKOS development platform through a collaboration with the University of Rome Tor Vergata, VocBench reached its third incarnation in 2017.
VocBench 3 (or simply VB3) is the new version of VocBench, funded by the European Commission's ISA² programme, with development managed by the Publications Office of the EU under contract 10632 (Infeurope S.A.).
In this workshop at SemTech 2020, we will introduce the platform to newcomers and interested users, present the latest forthcoming version of VocBench (9.0), and guide attendees through the various functionalities of the platform. Attendees will learn how to import data from spreadsheets, transform it into meaningful information modeled according to semantic standards, and maintain it in a powerful and dynamic collaborative environment.
A look ahead for new directions will conclude the talk. 

Ramesh C Gaur 
Website | Google Scholar 
 

Title: Open Culture Data: Issues and challenges before Indian Culture and Heritage Institutions 

Abstract