MAGMA goes Medi: Multimodal Augmentation through Adapter-based Fine-tuning for Radiology Report Generation

Data processing

The scripts used for data preprocessing are:

  • DATA_combined_prepare.py
  • DATA_iuxray_pepare.py
  • DATA_mimix_prepare.py
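
The preprocessing scripts presumably pair each radiology image with its report text and write the result to a training-ready file. The sketch below illustrates that kind of pairing step; the file names, JSON keys, and matching logic are assumptions, not taken from the repo.

```python
# Hypothetical sketch of an image/report pairing step, as the DATA_* scripts
# presumably perform. All file names and keys here are illustrative.
import json
from pathlib import Path

def build_pairs(image_dir: str, reports: dict) -> list:
    """Pair each image file with its report text via a shared study ID."""
    pairs = []
    for image_path in sorted(Path(image_dir).glob("*.png")):
        study_id = image_path.stem
        if study_id in reports:
            pairs.append({"image": str(image_path), "report": reports[study_id]})
    return pairs

if __name__ == "__main__":
    with open("reports.json") as f:          # hypothetical input file
        reports = json.load(f)
    with open("pairs.json", "w") as f:       # hypothetical output file
        json.dump(build_pairs("images/", reports), f, indent=2)
```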

Training

The two training experiments were run with train.py, using the following configs:

  • Training 1: configs/MAGMA_medi_biomedlm_mimic.yml
  • Training 2: configs/MAGMA_medi_biomedlm_mimic.yml
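
Before launching a run, it can help to sanity-check the config. A minimal sketch, assuming the configs are plain YAML; the keys printed below are illustrative, since the actual schema is defined by the MAGMA codebase.

```python
# Minimal sketch: inspect a training config before running train.py.
# Assumes plain YAML; the keys queried below are illustrative.
import yaml

with open("configs/MAGMA_medi_biomedlm_mimic.yml") as f:
    config = yaml.safe_load(f)

for key in ("name", "lr", "batch_size", "train_steps"):
    print(f"{key} = {config.get(key)}")
```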

Inference

Script used:

  • medimagma_inference.py
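
The upstream MAGMA repository exposes roughly the interface sketched below, and medimagma_inference.py likely follows a similar pattern; the checkpoint path, image, and prompt are placeholders, and the fine-tuned model may expect different inputs.

```python
# Sketch based on the upstream MAGMA interface (Aleph-Alpha/magma);
# medimagma_inference.py may differ. Checkpoint, image, and prompt are
# placeholders.
from magma import Magma
from magma.image_input import ImageInput

model = Magma.from_checkpoint(
    config_path="configs/MAGMA_medi_biomedlm_mimic.yml",
    checkpoint_path="./mp_rank_00_model_states.pt",  # placeholder path
    device="cuda:0",
)

inputs = [
    ImageInput("path/to/chest_xray.png"),  # placeholder image
    "Findings:",                           # placeholder prompt
]

embeddings = model.preprocess_inputs(inputs)
output = model.generate(embeddings=embeddings, max_steps=120, temperature=0.7)
print(output[0])
```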

Evaluation

Scripts used:

  • preprocess_predictions.py
  • all_timesteps_evaluate.py
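
As a reference point, report generation of this kind is commonly scored with n-gram metrics such as BLEU. A minimal sketch, assuming predictions and references keyed by study ID; the repo's all_timesteps_evaluate.py may compute different metrics and expect a different file layout.

```python
# Minimal sketch: corpus-level BLEU over generated vs. reference reports.
# File layout and keys are assumptions; all_timesteps_evaluate.py may use
# other metrics (e.g. ROUGE, CIDEr) entirely.
import json
from nltk.translate.bleu_score import corpus_bleu

with open("predictions.json") as f:   # hypothetical: {"study_id": "generated report"}
    preds = json.load(f)
with open("references.json") as f:    # hypothetical: {"study_id": "reference report"}
    refs = json.load(f)

hypotheses = [preds[k].split() for k in preds]
references = [[refs[k].split()] for k in preds]  # one reference per hypothesis

print("BLEU-4:", corpus_bleu(references, hypotheses))
```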

Plots

All materials and code can be found in the plots/ folder.

About

This repository builds on MAGMA, a GPT-style multimodal model that can understand any combination of images and language. Note: the freely available model from this repo is only a demo. For the latest multimodal and multilingual models from Aleph Alpha, see https://app.aleph-alpha.com
