OnToma is a Python package for mapping entities to identifiers using lookup tables. It is optimised for large-scale entity mapping and is designed to work with PySpark DataFrames.
OnToma supports the mapping of two kinds of entities: labels (e.g. brachydactyly) and ids (e.g. OMIM:112500).
OnToma includes a NER (Named Entity Recognition) module for extracting clean entity names from raw text labels. This is useful when your data contains labels that need preprocessing. Currently, this feature is available for drugs and diseases. To use NER features, see NER Module Documentation.
OnToma currently has modules to generate lookup tables from the following datasources:
- Open Targets disease, target, and drug indices
- Disease curation tables with the `SEMANTIC_TAG` and `PROPERTY_VALUE` fields (e.g. the Open Targets disease curation table). You can also provide your own curation tables, as long as they are compatible with the defined schema (see the sketch after this list).
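As an illustration, a minimal custom curation table could be built as follows, reusing the example entity from above. This is only a sketch: whether these two columns alone satisfy the full schema is an assumption, so check the schema definition.

```python
# Sketch of a custom curation table; the full required schema
# may include more than these two fields.
curation_df = spark.createDataFrame(
    [("OMIM:112500", "brachydactyly")],
    ["SEMANTIC_TAG", "PROPERTY_VALUE"],
)
```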
The package features entity normalisation using Spark NLP: entities in both the lookup table and the input DataFrame are normalised to improve entity matching.
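OnToma performs this normalisation internally, so you do not need to build anything yourself. Purely as an illustration of what a Spark NLP normalisation stage looks like (the column names here are made up, and OnToma's actual pipeline may differ):

```python
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Normalizer, Tokenizer

# Illustrative only: not OnToma's actual internals.
document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")
normalizer = (
    Normalizer()
    .setInputCols(["token"])
    .setOutputCol("normalized")
    .setLowercase(True)  # e.g. case-fold tokens before matching
)

pipeline = Pipeline(stages=[document_assembler, tokenizer, normalizer])
# pipeline.fit(df).transform(df) would add a `normalized` annotation column
```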
A successfully mapped entity may map to more than one identifier.
OnToma requires OpenJDK 8 or 11 to be installed on your system, as it's a prerequisite for PySpark and Spark-NLP.
Install OpenJDK 8 or 11 using Homebrew:
```bash
brew install openjdk@11
```

After installation, you need to set the `JAVA_HOME` environment variable. Add the following to your shell configuration file (e.g., `~/.zshrc` or `~/.bash_profile`):

```bash
export JAVA_HOME="/opt/homebrew/opt/openjdk@11/libexec/openjdk.jdk/Contents/Home"
export PATH="$JAVA_HOME/bin:$PATH"
```

Reload your shell configuration:

```bash
source ~/.zshrc
```

Verify the installation:
```bash
java -version
```

Install OnToma with pip:

```bash
pip install ontoma
```

OnToma requires a Spark session configured to include the Spark NLP library.
```python
from pyspark.sql import SparkSession
from pyspark.conf import SparkConf

# Add the Spark NLP library to the Spark configuration
config = (
    SparkConf()
    .set("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.3")
)

# Create the Spark session
spark = SparkSession.builder.config(conf=config).getOrCreate()
```

Here is an example showing how OnToma can be used to map diseases:
First, load data to generate a disease label lookup table:
```python
from ontoma import OnToma, OpenTargetsDisease

disease_index = spark.read.parquet("path/to/disease/index")
disease_label_lut = OpenTargetsDisease.as_label_lut(disease_index)
```

Then, create the OnToma object to be used for mapping entities:
```python
ont = OnToma(
    spark=spark,
    entity_lut_list=[disease_label_lut],
)
```

Given an input PySpark DataFrame `disease_df` containing the diseases to be mapped in the column `disease_name`:
```python
import pyspark.sql.functions as f

mapped_disease_df = ont.map_entities(
    df=disease_df,
    result_col_name="mapped_ids",
    entity_col_name="disease_name",
    entity_kind="label",
    type_col=f.lit("DS"),
)
```

Mapping results can be found in the column `mapped_ids`, as a list of the identifiers that the entity was successfully mapped to.
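Because `mapped_ids` holds a list, a common next step is to flatten the results to one identifier per row. A minimal sketch using the DataFrame from the example above:

```python
import pyspark.sql.functions as f

# One row per (disease_name, identifier) pair;
# use f.explode_outer instead to keep entities with no mapping.
flat_df = mapped_disease_df.select(
    "disease_name",
    f.explode("mapped_ids").alias("mapped_id"),
)
```

To map identifier entities (the second entity kind mentioned above) rather than labels, the same call presumably takes `entity_kind="id"`; this is an assumption, so check the API reference for the exact usage.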
When your drug labels contain dosages, forms, or brand names, use the NER module to extract clean entity names before mapping:
```python
from ontoma.ner.drug import extract_drug_entities
import pyspark.sql.functions as f

# Extract clean drug entities from raw labels
df_extracted = extract_drug_entities(
    spark=spark,
    df=raw_drug_df,
    input_col="raw_drug_label",
    output_col="extracted_drugs",
)

# Explode arrays for mapping
df_exploded = df_extracted.select(
    "*", f.explode("extracted_drugs").alias("clean_drug")
)

# Map with OnToma
mapped_df = ont.map_entities(
    df=df_exploded,
    entity_col_name="clean_drug",
    entity_kind="label",
    type_col=f.lit("drug"),
)
```

See NER Module Documentation for more details.
PySpark uses lazy evaluation, meaning transformations are not executed until an action is triggered.
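For example, calling `map_entities` only builds an execution plan; the mapping work runs when an action is invoked on the result:

```python
# No Spark job has run yet: mapped_disease_df is just a plan.
# An action such as count() (or a write) triggers the computation.
n_mapped = mapped_disease_df.count()
```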
When using the same OnToma object multiple times, it is recommended to specify a cache directory via the `cache_dir` parameter when creating the object, so that the lookup table processing logic is not re-run on each use:
```python
ont = OnToma(
    spark=spark,
    entity_lut_list=[disease_label_lut],
    cache_dir="path/to/cache/dir",
)
```

Install development dependencies:
```bash
uv sync --dev
```

Run all tests:
```bash
uv run pytest
```

Skip slow tests (e.g., NER tests that download large models):
```bash
uv run pytest -m "not slow"
```
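The `-m "not slow"` filter works because slow tests carry a `slow` pytest marker. A hypothetical example of how such a test would be marked:

```python
import pytest

@pytest.mark.slow  # deselected by `pytest -m "not slow"`
def test_ner_model_download():
    # Hypothetical slow test that downloads a large NER model.
    ...
```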