Workshops & Tutorials
Financial and Legal Dimensions of Healthcare
Organizers: Barry Smith and William Hogan
The Ontology of Medically Related Social Entities (OMRSE) is being developed to provide a framework for modeling demographic data that is compatible with BFO and with Open Biomedical Ontologies (OBO) Foundry best practices. Recent developments include representations of organizations, roles, facilities, demographic data, enrollment in insurance plans, and data about socio-economic indicators. At the same time, a number of ontology groups have been working on BFO-conformant ontologies in the financial and legal realms, covering phenomena such as insurance or liability at a more general level. The goal of this workshop is to provide a forum in which the contributions of ontologies in these domains can be critically reviewed, and potential opportunities for collaboration identified.
The 6th International Cells in Experimental Life Science Workshop, CELLS 2022
Organizers: Alexander D. Diehl and Yongqun He
The 6th International Cells in Experimental Life Science Workshop, CELLS 2022, will provide a venue for discussions of the application of biomedical ontologies to represent and analyze in vivo and in vitro cell- and cell line-related knowledge and data, including single cell RNA sequencing data and stem cell technologies. Current high throughput methods such as single cell RNA sequencing and flow and mass cytometry are producing a large amount of data related to existing and novel cell types in health and disease. At the same time, experimental approaches such as microscopy, genomics, and metabolomics are expanding understanding of cellular functioning in relation to neighboring cells and the whole organism. Ontologies are increasingly being used as a tool for integrating and analyzing these diverse data types by projects such as HuBMAP, the Human Cell Atlas, and Brain Data Standards. The Cell Ontology (CL) and Cell Line Ontology (CLO) have long been established as reference ontologies in the OBO framework for representing cell type and cell line information, and additional ontologies such as the Gene Ontology, Protein Ontology, and the Ontology for Biomedical Investigations are also important for representing not only experimental data about cell types but also the methods used to produce that data. The workshop will cover the extension of the Cell Ontology (CL) and related ontologies for ontological representation of cell types based on new methodologies and experiments.
CELLS 2022 will take place as a half-day virtual workshop and will be free for all attendees.
Please visit the CELLS website for submission details.
FAIR ontology harmonization and TRUST data interoperability
Organizers: Yu Lin and Gary Berg-Cross
Although ontologies are often developed and used for specific needs within an organization, there is always knowledge common to a specific domain, and high-level knowledge, such as space and time, that cuts across many domains. To avoid each organization spending time and resources modeling and representing that common knowledge, it is desirable to develop consensus on a range of semantic resources. These resources exist along a semantic ladder or spectrum, from structured vocabularies to high-level domain ontologies that support a wide range of use cases. All involve termed concepts whose definitions vary significantly in precision, scope, form, and the communities involved in agreeing on their meaning. Ontologies are special resources because they emphasize formal term definitions that specify the intended meaning of a term, clarifying and disambiguating natural-language definitions with necessary and sufficient logical conditions such as subclass and part-of relations.
They may also specify necessary conditions that allow checking whether a data instance is consistent (and non-circular) with the classes of which it is asserted to be a member. In practice, however, many similar ontologies modeling overlapping or even identical domains are developed, varying in form and conceptualization, so ambiguity persists. This results in classic silos of knowledge: too many competing local and domain standard vocabularies, and too little agreement on and formalization of what they mean. Recent efforts have included work on domain reference ontologies such as HyFO, a reference ontology for the hydro domain being developed as a formal logic extension to DOLCE. These are designed to aid broader domain ontology design, as well as to identify gaps and inconsistencies in representations of domain information.
In the current big data era, and especially during the recent pandemic, sparse and diverse data and ontologies developed by different groups must be aligned, integrated, or where possible merged in order to support meaningful analysis, as well as machine learning, artificial intelligence, and knowledge graphs for decision making. Data and ontology harmonization is therefore key to big data integration. The value of standard vocabularies and ontologies in supporting research communication, data sharing, and interoperability is widely recognized. The FAIR (Findability, Accessibility, Interoperability, and Reusability) principles and the TRUST (Transparency, Responsibility, User focus, Sustainability, and Technology) principles have been established and accepted by the global scientific community for digital objects. These principles also apply to ontologies and other semantic resources, supporting ample opportunities for data discovery, trust, sharing, and reuse across datasets, as well as enabling wide access to dataset quality information.
To accomplish these goals at the data level, FAIR and TRUSTworthy ontology harmonization is an important first step. This requires developing, improving, and disseminating community-agreed best practices for harmonizing semantic resources at all levels. This workshop aims to collect examples, technologies, and methodologies for developing FAIR ontologies and harmonization, and to discuss how harmonization across the semantic spectrum will facilitate data interoperability and the TRUST principles.
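The consistency and circularity checks described above can be sketched in miniature. The following toy illustration (not from the workshop; all class names and axioms are hypothetical) shows how subclass axioms, read as necessary conditions, let us test whether an instance's asserted classes are mutually consistent:

```python
# Hypothetical subclass axioms: each entry asserts key is a subclass of value.
SUBCLASS = {
    "Lake": "WaterBody",
    "River": "WaterBody",
    "WaterBody": "GeographicFeature",
    "Building": "GeographicFeature",
}

# Pairs of classes declared disjoint (they can share no instances).
DISJOINT = {frozenset({"WaterBody", "Building"})}

def superclasses(cls):
    """Walk the subclass chain upward, flagging circular axioms."""
    seen = set()
    while cls in SUBCLASS:
        cls = SUBCLASS[cls]
        if cls in seen:
            raise ValueError("circular subclass hierarchy at " + cls)
        seen.add(cls)
    return seen

def check(asserted):
    """Return (is_consistent, inferred_classes) for an instance's asserted
    classes: inconsistent if any two inferred classes are disjoint."""
    inferred = set(asserted)
    for c in asserted:
        inferred |= superclasses(c)
    for pair in DISJOINT:
        if pair <= inferred:
            return False, inferred
    return True, inferred
```

Here `check({"Lake"})` infers WaterBody and GeographicFeature and reports consistency, while `check({"Lake", "Building"})` is flagged because WaterBody and Building are declared disjoint.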
Workshop on Ontology Tools and Workflows
Organizers: James A. Overton, Charles Tapley Hoyt and Christopher J. Mungall
We propose a workshop on ontology tools and workflows at ICBO 2022 that will complement the software demonstration track by providing a forum for ontology tool developers and maintainers to discuss their software, workflows, and future directions.
11th Vaccine and Drug Ontology Studies (VDOS-2022)
Organizers: Junguk Hur, Cui Tao and Yongqun He
Contact: Cui Tao, firstname.lastname@example.org
Drugs and vaccines have contributed to dramatic improvements in public health worldwide. Over the last decade, the biomedical ontology community has worked to represent various areas associated with drugs, including vaccines, extending existing health and clinical terminology systems (e.g., SNOMED, RxNorm, NDF-RT, and MedDRA) and vernacular medical terminologies, and applying them to research and clinical data. This workshop will provide a platform for discussing innovative solutions as well as the challenges in the development and application of biomedical ontologies to representing and analyzing drugs and vaccines, their administration, the immune responses they induce, adverse events, and similar topics. The workshop will cover two main areas: (i) ontology representation of drugs and vaccines, and (ii) applications of the ontologies in real-world situations – administration, adverse events, etc. Examples of biomedical subject matter in scope for this workshop include: drug components (e.g., drug active ingredients, vaccine antigens, and adjuvants), administration details (e.g., dosage, administration route, and frequency), gene immune responses and pathways, drug-drug or drug-food interactions, and adverse events. Both research and clinical subjects will be covered. We will also focus on computational methods used to study these, for example, literature mining of vaccine/drug-gene interaction networks, meta-analysis of host immune responses, and time event analysis of pharmacological effects. This workshop is expected to support a deeper understanding of vaccine and drug mechanisms and effects using ontologies. More specific topics will be selected based on attendees’ submissions and interests.
OBO Tutorial: Using and Reusing Ontologies
Organizers: James A. Overton, Rebecca Jackson, Chris Mungall, Nicole Vasilevsky, Nicolas Matentzoglu and Randi Vita
The Open Biological and Biomedical Ontologies (OBO) community includes hundreds of open source scientific ontology projects, committed to shared principles and practices for interoperability and FAIR data. An OBO tutorial has been a regular feature of ICBO for a decade, introducing new and experienced ontology users and developers to ontologies in general, and to current OBO tools and techniques specifically. While ICBO attracts many ontology experts, it also includes an audience of ontology beginners, and of ontology users looking to become ontology developers or to further refine their skills. Our OBO tutorial will help beginner and intermediate ontology users with a combination of theory and hands-on practice.
Food Process Ontology Hackathon
Organizers: Damion Dooley, Tarini Naravane, Matthew Lange, Chen Yang and Hande Küçük McGinty
A food processing discussion group within the Joint Food Ontology Workgroup has proposed a general OBO Foundry compatible food processing ontology and a more specific recipe model, which is currently focused on the combination of ingredients, devices, and recipe steps. The recipe model needs to be tested on real-world recipe databases, to develop the kinds of RDF data structures and queries that are of practical use within food science, but which also have direct application in interoperability with other ontology-driven models of nutrient estimation, food-related individual and population health, environmental impact of ingredients, and food supply chain traceability. This workshop aims to create RDF views of recipe database content from the perspective of OBO Foundry ontologies, including FoodOn, ChEBI, and units of measure ontologies. Hacking can take on a number of challenges, ranging from named entity recognition of recipe ingredients in free-text recipes, to the representation of multi-component foods such as those contained in food composition databases (e.g. world.openfoodfacts.org), to the transformation of semi-normalized recipe databases into pure food process ontology form. Additional work on expanding the recipe ontology representation beyond its “lab bench” purview into more nuanced cultural and historical representation of food knowledge is welcome too!
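As an illustrative sketch of the kind of RDF view the hackathon targets, the snippet below expresses one recipe step as subject-predicate-object triples and queries it with a toy lookup. The CURIEs used here (FOODON:..., CHEBI:..., RO:...) are placeholders, not the real ontology identifiers:

```python
# A tiny in-memory graph: one recipe step combining two ingredients.
triples = [
    (":step1", "rdf:type",      "FOODON:recipe_step"),  # placeholder class
    (":step1", "RO:has_input",  ":flour"),
    (":step1", "RO:has_input",  ":water"),
    (":step1", "RO:has_output", ":dough"),
    (":flour", "rdf:type",      "FOODON:flour"),        # placeholder class
    (":water", "rdf:type",      "CHEBI:water"),         # placeholder class
]

def objects(graph, subject, predicate):
    """All objects matching a (subject, predicate) pair - the shape of a
    basic SPARQL triple pattern."""
    return [o for s, p, o in graph if s == subject and p == predicate]
```

For example, `objects(triples, ":step1", "RO:has_input")` returns the two ingredient nodes, `[":flour", ":water"]`; in practice the same pattern would be a SPARQL query over a real triple store.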
LinkML Tutorial
Organizers: Christopher Mungall, Sierra Moxon, Mark Miller, Nomi Harris and Tim Putnam
The Linked Data Modeling Language (LinkML) is an object-oriented data modeling framework that aims to bring semantic web standards into a familiar modeling paradigm, simplifying the production of FAIR ontology-ready data. It can be used for schematizing a variety of kinds of data, ranging from simple flat checklist-style standards to complex interrelated normalized data utilizing polymorphism/inheritance. One major benefit of LinkML is that the framework provides not only the modeling components but also the software and tools needed to load, output, and validate data conforming to a LinkML model.
This tutorial, aimed primarily at biocurators and data modelers, will be a hands-on exploration of the LinkML modeling framework including a discussion of its included tools that help the user work with data conforming to a LinkML model. We will guide the attendees through designing a new model that exercises the main LinkML modeling components, using LinkML tools to serialize and validate instances of the new model, and will discuss the many ways to author and maintain a LinkML model.
Data integration is a major challenge in the life sciences, due to heterogeneity, complexity, the proliferation of ad-hoc formats and data structures, and poor compliance with FAIR guidelines. Often, the first step of integrating data from an external source is to figure out the similarities and differences in domain concepts across disciplines. Beyond this task, the technical implementation of the model influences how scientific concepts are related to a data source. An aggregator needs to understand how modeling choices are influenced by technical implementations in addition to scientific domains in order to bring resources together. Real barriers exist because we can’t map models to one another. Examples of groups attempting to solve these data integration problems are found in projects such as the Cancer Research Data Commons, the NCATS Data Translator, NMDC, and the Monarch Initiative.
Ontologies and controlled vocabularies are a necessary part of the solution, providing a formal vocabulary of all the entities in a domain. However, ontologies alone are insufficient, as they are not intended for modeling the data itself. A wide variety of frameworks and methods for modeling data exist, such as JSON-Schema, UML, and relational database schemas, but these are typically tied to an underlying concrete representation and do not make data maximally interoperable.
LinkML is a modeling framework that provides software and tools needed to load, dump, and validate data conforming to a LinkML model.
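As a rough sketch of what such a model looks like (the schema URI, class, and attribute names below are illustrative, not taken from the tutorial), a minimal LinkML model is authored as YAML along these lines:

```yaml
id: https://example.org/person-schema   # illustrative schema URI
name: person_schema
prefixes:
  linkml: https://w3id.org/linkml/
imports:
  - linkml:types                        # brings in built-in types
default_range: string

classes:
  Person:
    attributes:
      id:
        identifier: true                # unique key for instances
      name:
        required: true
      age:
        range: integer
```

LinkML generators can then turn a model like this into other artifacts, such as JSON Schema, SQL DDL, or Python and Java classes.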
Prerequisites
This workshop is open to all interested participants, and following along with the hands-on portions is encouraged whether or not you participate directly. For the hands-on training pieces, basic familiarity with running commands (Python scripts, bash commands) from the command line (in a terminal) will be assumed. Please complete Lesson0 (TBD) prior to the tutorial session. Please also create a GitHub account if you do not already have one; the OBO Semantic Engineering Training has a nice tutorial on getting started with GitHub. We will also provide a Docker container; if you prefer Docker, Docker or Docker Desktop should be installed.
Learning Objectives
Learn how to author a new LinkML model that exercises some of the main modeling components. Generate JSON Schema, SQL DDL, Python, and Java classes from the new model. Survey several pre-built ways of authoring models. Learn about DataHarmonizer, a standardized spreadsheet editor and validator that can be used to curate data conforming to a custom LinkML model.