
Covid-19-Community

This project is a community effort to build a Neo4j Knowledge Graph (KG) that integrates heterogeneous biomedical and environmental datasets to help researchers analyze the interplay between host, pathogen, the environment, and COVID-19.

Knowledge Graph Schema

This schema shows the Nodes (circles) and their Relationships (arrows) in the COVID-19-Net KG.

The node NodeMetadata (top left) describes nodes and refers to relevant ontologies (e.g., Infectious Disease Ontology). The left side of the schema shows the geographic hierarchy from the world down to the city level (cities with more than 1,000 inhabitants), as well as PostalCode (US ZIP code) and US Census Tract. The right side shows COVID-19 case counts and information about the host organisms, the pathogen, virus strains, genes, proteins, protein-protein interactions, and publications. Cases and strains are linked to geolocations.

Note: this KG is a work in progress and changes frequently.

Browse the Knowledge Graph with the Neo4j Browser

The Knowledge Graph is updated daily between 07:00 and 08:00 UTC.

View of the Neo4j Browser showing the result of a query about interactions of the Spike glycoprotein with human host proteins and related publications in PubMed Central.

You can browse the Knowledge Graph here (click the launch button and follow the instructions below).

Neo4j Browser

Run a Full-text Query

The KG can be searched by locations (geographic locations and cruise ship names) and bioentities (proteins, genes, strains, organisms) using a full-text search. The results contain exact and approximate matches.

Example full-text query: find spike proteins

Query:

CALL db.index.fulltext.queryNodes("bioentities", "spike") YIELD node
RETURN node

Result:

This subgraph shows the results of the full-text search. Five proteins contain the word Spike. Each protein is associated with one or more protein names (synonyms); only one name is shown here. The Spike glycoprotein in the center is the full-length gene product encoded by the SARS-CoV-2 S gene. The other four proteins are cleavage products (fragments) of the full-length protein.

Example full-text query: find spike proteins - tabular results

The following query returns the names of the matched bioentities and the labels of the nodes (e.g., Protein, ProteinName) sorted by the match score in descending order.

Query:

CALL db.index.fulltext.queryNodes("bioentities", "spike") YIELD node, score
RETURN node.name, labels(node), score

Result:

Run a Cypher Query

Specific Nodes and Relationships in the KG can be searched using the Cypher query language.

Example Cypher query: find viral strains collected in Houston

Query:

MATCH (s:Strain)-[:FOUND_IN]->(l:Location{name: 'Houston'}) RETURN s, l

Result:

This subgraph shows viral strains (green) of the SARS-CoV-2 virus carried by human hosts in Houston (organisms in gray). The strains have several variants (e.g., mutations) (red) in common. Details of the highlighted variant are shown at the bottom. This variant is a missense mutation in the S gene (S:c.1841gAt>gGt): the base "A" (Adenine) found in the Wuhan-Hu-1 reference genome NC_045512 is mutated to a "G" (Guanine) at position 23403, changing the encoded Spike glycoprotein (QHD43416) from a "D" (Aspartic acid) to a "G" (Glycine) amino acid at position 614 (QHD43416.1:p.614D>G).

Example Cypher query: aggregate cumulative COVID-19 case numbers at the US state (Admin1) level

Query:

MATCH (o:Outbreak{id: "COVID-19"})<-[:RELATED_TO]-(c:Cases{date: date("2020-07-06")})-[:REPORTED_IN]->(a:Admin2)-[:IN]->(a1:Admin1)
RETURN a1.name as state, sum(c.cummulativeConfirmed) as confirmedCases, sum(c.cummulativeDeaths) as deaths
ORDER BY confirmedCases DESC;

Result:

Note: some cases in the COVID-19 Data Repository by Johns Hopkins University cannot be mapped to a county or state location (e.g., correctional facilities, missing location data). Therefore, the results of this query underreport the actual number of cases.

Query the Knowledge Graph in Jupyter Notebook

Cypher queries can be run in Jupyter Notebooks to enable reproducible data analyses and visualizations.
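As an illustration, here is a minimal sketch of running a Cypher query from a notebook cell with the official Neo4j Python driver. The connection URI, credentials, and returned property are assumptions for a local setup (see the local installation steps below); the notebooks in this repository may use a different client library.

from neo4j import GraphDatabase

# Hypothetical local connection; adjust the URI and credentials to your setup.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "neo4jbinder"))

with driver.session() as session:
    # Example query from the Cypher section above: viral strains collected in Houston.
    result = session.run(
        "MATCH (s:Strain)-[:FOUND_IN]->(l:Location {name: $city}) "
        "RETURN s.name AS strain LIMIT 10",
        city="Houston",
    )
    for record in result:
        print(record["strain"])

driver.close()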

You can run the following Jupyter Notebooks in your web browser:

Binder

Once Jupyter Lab launches, navigate to the notebooks/queries directory and run the following notebooks:

Notebook Description
CaseCounts Runs example queries for case counts
Locations Runs example queries for locations
Demographics Runs example queries for demographics data from the American Community Survey
Bioentities Runs example queries for bioentities
AnalyzeVariantsSpikeGlycoprotein Analyzes SARS-CoV-2 Spike glycoprotein variants
... add examples here ...

Data Download, Preparation, and Integration

The COVID-19-Net Knowledge Graph is created from publicly available resources, including databases, files, and web services. A reproducible workflow, defined in this repository, is used to run a daily update of the knowledge graph. The Jupyter notebooks listed in the table below download, clean, standardize, and integrate data in the form of .csv files for ingestion into the Knowledge Graph. The prepared data files are saved in the NEO4J_HOME/import directory, and cached intermediate files are saved in the NEO4J_HOME/import/cache directory. These notebooks are run daily at 07:00 UTC in batch using Papermill with the update script to download the latest data and update the Knowledge Graph (a minimal sketch of this batch-execution pattern is shown after the table).

Notebook Description
00e-GeoNamesCountry Downloads country information from GeoNames.org
00f-GeoNamesAdmin1 Downloads first administrative divisions (State, Province, Municipality) information from GeoNames.org
00g-GeoNamesAdmin2 Downloads second administrative divisions (Counties in the US) information from GeoNames.org
00h-GeoNamesCity Downloads city information (population > 1000) from GeoNames.org
00i-USCensusRegionDivisionState2017 Downloads US regions, divisions, and state FIPS code assignments from the US Census Bureau
00j-USCensusCountyCity2017 Downloads US County FIPS codes from the US Census Bureau
00k-UNRegion Downloads UN geographic regions, subregions, and intermediate region information from United Nations
00n-Geolocation Downloads longitude, latitude, elevation, and population data from GeoNames.org
01a-NCBIStrain Downloads the SARS-CoV-2 strain data from NCBI
01b-Nextstrain Downloads the SARS-CoV-2 strain metadata from Nextstrain
01c-NCBIRefSeq Downloads the SARS-CoV-2 reference genome, genes, and protein products from NCBI
01d-CNCBStrain Downloads SARS-CoV-2 viral strains and variation data from CNCB (China National Center for Bioinformation) [takes about 12 hours to run the first time, results are cached]
01d-CNCBStrainLocations Standardizes locations for variation data from CNCB (China National Center for Bioinformation)
01e-ProteinProteinInteraction Downloads SARS-CoV-2 - human protein interaction data from IntAct
01h-PMCAccession Downloads PubMed Central articles that mention NCBI and GISAID strains
02a-JHUCases Downloads cumulative confirmed cases and deaths from the COVID-19 Data Repository by Johns Hopkins University
02a-JHUCasesLocation Standardizes location data for the COVID-19 Data Repository by Johns Hopkins University
02c-SDHHSACases Downloads cumulative confirmed COVID-19 cases from the County of San Diego, Health and Human Services Agency
03a-USCensusDP05 Downloads demographic data estimates (DP05) from the American Community Survey 5-Year Data (2009-2018)
... Future notebooks that add new data to the knowledge graph
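A minimal sketch of the Papermill-based batch update mentioned above, assuming hypothetical notebook and output paths (the actual update script in this repository may differ):

import papermill as pm
from pathlib import Path

# Hypothetical batch run: execute every data-prep notebook in alphabetical order
# and save the executed copies to a separate output directory.
notebook_dir = Path("notebooks/dataprep")
output_dir = Path("notebooks/dataprep/executed")
output_dir.mkdir(parents=True, exist_ok=True)

for notebook in sorted(notebook_dir.glob("0*.ipynb")):
    pm.execute_notebook(str(notebook), str(output_dir / notebook.name))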

How to run Jupyter Notebook Examples locally

1. Fork this project

A fork is a copy of a repository in your GitHub account. Forking a repository allows you to freely experiment with changes without affecting the original project.

In the top-right corner of this GitHub page, click Fork.

Then, download all materials to your laptop by cloning your copy of the repository, where your-user-name is your GitHub user name. To clone the repository from a Terminal window or the Anaconda prompt (Windows), run:

git clone https://github.com/your-user-name/covid-19-community.git
cd covid-19-community

2. Create a conda environment

The file environment.yml specifies the Python version and all packages required to run the notebooks.

conda env create -f environment.yml

Activate the conda environment

conda activate covid-19-community

3. Launch Jupyter Lab

jupyter lab

Navigate to the notebooks/queries directory to run the example Jupyter Notebooks.

How to run the Data Download and Preparation steps locally

Note: the following steps have been implemented for macOS and Linux only.

Some steps will take a very long time, e.g., notebook 01d-CNCBStrain may take more than 12 hours to run the first time.

Follow steps 1-3 above.

4. Install Neo4j Desktop

Download Neo4j

Then, launch the Neo4j Browser, create an empty database, set the password to "neo4jbinder", and close the database.

5. Set Environment Variable

Add the environment variable NEO4J_HOME with the path to the Neo4j database installation to your .bash_profile file, e.g.

export NEO4J_HOME="/Users/username/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-.../installation-4.0.3"
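For reference, here is a minimal sketch of how a data-prep notebook might resolve the import directory from this variable; the exact code in the notebooks may differ.

import os
from pathlib import Path

# NEO4J_HOME must point to the Neo4j installation directory (see the export above).
neo4j_home = Path(os.environ["NEO4J_HOME"])
import_dir = neo4j_home / "import"
import_dir.mkdir(parents=True, exist_ok=True)
print(f"Prepared .csv files will be written to {import_dir}")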

6. Run Data Download Notebooks

Start Jupyter Lab.

jupyter lab

Navigate to the notebooks/dataprep directory and run all notebooks in alphabetical order to download, clean, standardize, and save the data in the NEO4J_HOME/import directory for ingestion into the Neo4j database.

7. Upload Data into a Local Neo4j Database

After all data files have been created in step 6, run notebooks/local/2-CreateKGLocal.ipynb to import the data into your local Neo4j database. Make sure the Neo4j Browser is closed before running the database import!

8. Browse local KG in Neo4j Browser

After step 7 has completed, start the database in the Neo4j Browser to interactively explore the KG or run local queries.

How can you contribute?

  • File an issue to discuss your idea so we can coordinate efforts
  • Help with specific issues
  • Suggest publicly accessible data sets
  • Add Jupyter Notebooks with data analyses, maps, and visualizations
  • Report bugs or issues

Citation

Peter W. Rose, David Valentine, Ilya Zaslavsky, COVID-19-Net: Integrating Health, Pathogen and Environmental Data into a Knowledge Graph for Case Tracking, Analysis, and Forecasting. Available online: https://github.com/covid-19-net/covid-19-community (2020).

Please also cite the data providers.

Data Providers

The schema below shows how data sources are integrated into the nodes of the Knowledge Graph.

Acknowledgements

Neo4j provided technical support and organized the community development: "GraphHackers, Let’s Unite to Help Save the World — Graphs4Good 2020".

Students of the UCSD Spatial Data Science course DSC-198: EXPLORING COVID-19 PANDEMIC WITH DATA SCIENCE

Contributors: Kaushik Ganapathy, Braden Riggs, Eric Yu

Project KONQUER team members at UC San Diego and UTHealth at Houston.

Funding

Development of this prototype is supported in part by the National Science Foundation under the following award numbers:

NSF Convergence Accelerator Phase I (RAISE): Knowledge Open Network Queries for Research (KONQUER) (1937136)

NSF RAPID: COVID-19-Net: Integrating Health, Pathogen and Environmental Data into a Knowledge Graph for Case Tracking, Analysis, and Forecasting (2028411)
