ASCC multi-assembly re-write #65

Open
wants to merge 61 commits into base: dev
61 commits
4403b94
Updates as part of Version 1 rewrite
DLBPointon Nov 12, 2024
4589922
Updates as part of Version 1 rewrite
DLBPointon Nov 12, 2024
e2dfd3a
Updates as part of Version 1 rewrite
DLBPointon Nov 12, 2024
79f36bf
Updates from rewrite to main repo
DLBPointon Nov 13, 2024
fd29a3d
Updates from rewite
DLBPointon Nov 13, 2024
1366293
Updating test case and CICD
DLBPointon Nov 13, 2024
9bd0beb
Updates from Eeriks docs branch
DLBPointon Nov 13, 2024
0a8d370
Roll back a couple of things
DLBPointon Nov 13, 2024
2c810e9
Formatting updates
DLBPointon Nov 13, 2024
6c49224
Newline
DLBPointon Nov 13, 2024
68f7521
Updates for CICD - addition of the modules.json
DLBPointon Nov 13, 2024
c009601
Updates for CICD
DLBPointon Nov 13, 2024
fdd1b02
Updates for CICD
DLBPointon Nov 13, 2024
91cd465
Updates for CICD
DLBPointon Nov 13, 2024
735c78a
Adding a test function to parse nextflow version and resouce requirem…
DLBPointon Nov 14, 2024
834ceb0
Adding a test function to parse nextflow version and resouce requirem…
DLBPointon Nov 14, 2024
cb574c7
Adding a test function to parse nextflow version and resouce requirem…
DLBPointon Nov 14, 2024
dff1b55
Adding a test function to parse nextflow version and resouce requirem…
DLBPointon Nov 14, 2024
6dd6867
Adding a test function to parse nextflow version and resouce requirem…
DLBPointon Nov 14, 2024
27b9129
Adding a test function to parse nextflow version and resouce requirem…
DLBPointon Nov 14, 2024
fd521dc
Updates
DLBPointon Nov 14, 2024
0612a44
Updates
DLBPointon Nov 14, 2024
a4e87aa
Adding a GitHub test yaml
DLBPointon Nov 14, 2024
5a092d7
Pinning verison of nf-core and bug fixes
DLBPointon Nov 15, 2024
e5024e7
Pinning verison of nf-core and bug fixes
DLBPointon Nov 15, 2024
44c8bd6
Update min nextflow version to test on
DLBPointon Nov 18, 2024
cee7a3c
updates
DLBPointon Nov 20, 2024
fa27ff0
Updating modules and fixing blast_lineage bug
DLBPointon Nov 21, 2024
568d701
Updating modules and fixing blast_lineage bug
DLBPointon Nov 21, 2024
fd00d0f
Fixing CI data
DLBPointon Nov 21, 2024
a89bd83
Fixing CI data
DLBPointon Nov 21, 2024
048671a
Fixing CI data
DLBPointon Nov 21, 2024
2334287
Fixing CI data
DLBPointon Nov 21, 2024
7aac083
Fixing CI data
DLBPointon Nov 21, 2024
302c18e
Updating biopython+numpy images built with anaconda to biopython wo a…
DLBPointon Nov 26, 2024
a28e957
Linting YML was broken
DLBPointon Nov 26, 2024
88d7423
Pre Commit linting
DLBPointon Nov 26, 2024
4f717cd
Pre Commit linting
DLBPointon Nov 26, 2024
6cc000f
Pre Commit linting
DLBPointon Nov 26, 2024
35703e4
Pre Commit linting
DLBPointon Nov 26, 2024
62552c6
Pre Commit linting
DLBPointon Nov 26, 2024
03f6ead
Re-add pipeline download
DLBPointon Nov 26, 2024
7e5f64d
Re-add pipeline download
DLBPointon Nov 26, 2024
f5f2e92
Forgot to point it to the github_testing folder
DLBPointon Nov 26, 2024
c7dd1a9
Forgot to point it to the github_testing folder
DLBPointon Nov 26, 2024
5dd28da
Updating notes in the pipeline and removed the BTK_YAML arg from the …
DLBPointon Nov 27, 2024
3d24a42
Using realpath so we don't have to pass the bam around like a wally
DLBPointon Nov 28, 2024
3b8d923
Updates to pipeline
DLBPointon Nov 29, 2024
0fb7342
Uncomment out the organellar workflow
DLBPointon Nov 29, 2024
d2dbd4a
Lint
DLBPointon Nov 29, 2024
0e6d799
revert blast makeblastdb to original
DLBPointon Nov 29, 2024
8e2ec15
Updated to fix organellar BTK issue
DLBPointon Nov 29, 2024
b05a4a6
Removing local busco dataset call
DLBPointon Dec 2, 2024
e35a762
Updates
DLBPointon Dec 5, 2024
aa241a9
Updating setup-nextflow from 1 to 2
DLBPointon Dec 5, 2024
ca770c9
Updates
DLBPointon Dec 9, 2024
f432b42
Adding functionality for gzipped input
DLBPointon Dec 12, 2024
fa7ebf2
Adding functionality for gzipped input
DLBPointon Dec 12, 2024
8afa3ec
Updates
DLBPointon Dec 12, 2024
4d3f395
Citation for sanger-tol-btk
DLBPointon Dec 12, 2024
4335b1e
Note for further updates
DLBPointon Dec 12, 2024
11 changes: 8 additions & 3 deletions .github/workflows/ci.yml
@@ -26,7 +26,7 @@ jobs:
strategy:
matrix:
NXF_VER:
- "22.10.1"
- "24.04.0"
- "latest-everything"
steps:
- name: Get branch names
@@ -39,7 +39,7 @@
uses: actions/checkout@v3

- name: Install Nextflow
uses: nf-core/setup-nextflow@v1
uses: nf-core/setup-nextflow@v2
with:
version: "${{ matrix.NXF_VER }}"

@@ -71,6 +71,10 @@ jobs:
# Download A fungal test data set that is full enough to show some real output.
run: |
curl https://tolit.cog.sanger.ac.uk/test-data/resources/ascc/asccTinyTest_V2.tar.gz | tar xzf -
pwd
ls

cp /home/runner/work/ascc/ascc/asccTinyTest_V2/assembly/pyoelii_tiny_testfile_with_adapters.fa /home/runner/work/ascc/ascc/asccTinyTest_V2/assembly/Pyoeliiyoelii17XNL_assembly_hap.fa

- name: Temporary ASCC Diamond Data
run: |
@@ -139,4 +143,5 @@ jobs:
# For example: adding multiple test runs with different parameters
# Remember that you can parallelise this by using strategy.matrix
run: |
nextflow run ./sanger-ascc/${{ steps.branch-names.outputs.current_branch }}/main.nf -profile test,singularity --outdir ./results --include ALL --exclude btk_busco
pwd
nextflow run ./sanger-ascc/${{ steps.branch-names.outputs.current_branch }}/main.nf -profile test,singularity -params-file ./sanger-ascc/${{ steps.branch-names.outputs.current_branch }}/assets/github_testing/github_test.yaml --input ./sanger-ascc/${{ steps.branch-names.outputs.current_branch }}/assets/github_testing/samplesheet.csv --exclude busco_btk --organellar_exclude busco_btk
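The test run above points `--input` at `assets/github_testing/samplesheet.csv`. Based on the samplesheet format shown in the README changes further down in this PR (`sample,assembly_type,assembly_file`), that file likely looks something like the sketch below; the row values are illustrative assumptions, not the actual CI samplesheet.

```csv
sample,assembly_type,assembly_file
Pyoeliiyoelii17XNL,Primary,/home/runner/work/ascc/ascc/asccTinyTest_V2/assembly/Pyoeliiyoelii17XNL_assembly_hap.fa
```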
92 changes: 28 additions & 64 deletions .github/workflows/linting.yml
@@ -11,100 +11,64 @@ on:
types: [published]

jobs:
EditorConfig:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4

- uses: actions/setup-node@v4
- name: Set up Python 3.12
uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d # v5
with:
node-version: "20.11.0"
python-version: "3.12"

- name: Install editorconfig-checker
run: npm install -g editorconfig-checker
- name: Install pre-commit
run: pip install pre-commit

- name: Run ECLint check
run: editorconfig-checker -exclude README.md $(find .* -type f | grep -v '.git\|.py\|.md\|json\|yml\|yaml\|html\|css\|work\|.nextflow\|build\|nf_core.egg-info\|log.txt\|Makefile')

Prettier:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3

- uses: actions/setup-node@v3

- name: Install Prettier
run: npm install -g prettier

- name: Run Prettier --check
run: prettier --check ${GITHUB_WORKSPACE}

PythonBlack:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3

- name: Check code lints with Black
uses: psf/black@stable

# If the above check failed, post a comment on the PR explaining the failure
- name: Post PR comment
if: failure()
uses: mshick/add-pr-comment@v1
with:
message: |
## Python linting (`black`) is failing

To keep the code consistent with lots of contributors, we run automated code consistency checks.
To fix this CI test, please run:

* Install [`black`](https://black.readthedocs.io/en/stable/): `pip install black`
* Fix formatting errors in your pipeline: `black .`

Once you push these changes the test should pass, and you can hide this comment :+1:

We highly recommend setting up Black in your code editor so that this formatting is done automatically on save. Ask about it on Slack for help!

Thanks again for your contribution!
repo-token: ${{ secrets.GITHUB_TOKEN }}
allow-repeats: false
- name: Run pre-commit
run: pre-commit run --all-files

nf-core:
runs-on: ubuntu-latest
steps:
- name: Check out pipeline code
uses: actions/checkout@v3
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4

- name: Install Nextflow
uses: nf-core/setup-nextflow@v1
uses: nf-core/setup-nextflow@v2

- uses: actions/setup-python@v4
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d # v5
with:
python-version: "3.8"
python-version: "3.12"
architecture: "x64"

- name: read .nf-core.yml
uses: pietrobolcato/[email protected]
id: read_yml
with:
config: ${{ github.workspace }}/.nf-core.yml

- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install nf-core
pip install nf-core==${{ steps.read_yml.outputs['nf_core_version'] }}

- name: Run nf-core lint
env:
GITHUB_COMMENTS_URL: ${{ github.event.pull_request.comments_url }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_PR_COMMIT: ${{ github.event.pull_request.head.sha }}
run: nf-core -l lint_log.txt lint --dir ${GITHUB_WORKSPACE} --markdown lint_results.md
- name: Run nf-core lint
env:
GITHUB_COMMENTS_URL: ${{ github.event.pull_request.comments_url }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_PR_COMMIT: ${{ github.event.pull_request.head.sha }}
run: nf-core -l lint_log.txt lint --dir ${GITHUB_WORKSPACE} --markdown lint_results.md

- name: Save PR number
if: ${{ always() }}
run: echo ${{ github.event.pull_request.number }} > PR_number.txt

- name: Upload linting log file artifact
if: ${{ always() }}
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4
with:
name: linting-logs
path: |
lint_log.txt
lint_results.md
PR_number.txt
PR_number.txt {%- endraw %}
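The new `pre-commit` job assumes a `.pre-commit-config.yaml` at the repository root, which is not part of this diff. A minimal sketch of such a config is shown below; the hooks and pinned revisions are assumptions for illustration, not the repository's actual configuration.

```yaml
# Hypothetical .pre-commit-config.yaml — hook choices and revs are illustrative only.
repos:
  - repo: https://github.com/pre-commit/mirrors-prettier
    rev: "v3.1.0"
    hooks:
      - id: prettier
  - repo: https://github.com/psf/black
    rev: "24.4.2"
    hooks:
      - id: black
```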
33 changes: 23 additions & 10 deletions .nf-core.yml
@@ -1,22 +1,35 @@
repository_type: pipeline
lint:
files_exist: false
files_unchanged:
files_exist:
- CODE_OF_CONDUCT.md
- assets/nf-core-ascc_logo_light.png
- docs/images/nf-core-ascc_logo_light.png
- docs/images/nf-core-ascc_logo_dark.png
- .github/ISSUE_TEMPLATE/bug_report.yml
- .github/workflows/branch.yml
- .github/ISSUE_TEMPLATE/config.yml
- .github/workflows/awstest.yml
- .github/workflows/awsfulltest.yml
- conf/igenomes.config
files_unchanged:
- LICENCE
- .github/CONTRIBUTING.md
- .github/PULL_REQUEST_TEMPLATE.md
- .github/workflows/branch.yml
- .github/workflows/linting_comment.yml
- assets/email_template.html
- pyproject.toml
- LICENSE
- .github/workflows/linting.yml
- lib/NfcoreTemplate.groovy
- .github/ISSUE_TEMPLATE/bug_report.yml
- CODE_OF_CONDUCT.md
- gitignore
- assets/nf-core-ascc_logo_light.png
- assets/email_template.html
- docs/images/nf-core-ascc_logo_light.png
- docs/images/nf-core-ascc_logo_dark.png
multiqc_config:
- report_comment
nextflow_config:
- manifest.name
- manifest.homePage
multiqc_config: False
nf_core_version: 2.14.1
repository_type: pipeline
template:
prefix: sanger-tol
skip:
- igenomes
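Because the pinned `nf_core_version` above is what the linting workflow installs, the same lint step can be reproduced locally with the command already used in `.github/workflows/linting.yml` (a sketch; run from the pipeline root):

```bash
# Matches the CI lint step; nf-core tools version pinned in .nf-core.yml
pip install nf-core==2.14.1
nf-core -l lint_log.txt lint --dir . --markdown lint_results.md
```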
37 changes: 31 additions & 6 deletions CITATIONS.md
@@ -10,19 +10,44 @@

## Pipeline tools

- [FastQC](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)
- [FCS-adaptor](https://github.com/ncbi/fcs/wiki/FCS-adaptor-quickstart)

> Andrews, S. (2010). FastQC: A Quality Control Tool for High Throughput Sequence Data [Online]. Available online https://www.bioinformatics.babraham.ac.uk/projects/fastqc/.
> Astashyn, Alexander, Eric S. Tvedte, Deacon Sweeney, Victor Sapojnikov, Nathan Bouk, Victor Joukov, Eyal Mozes, et al. 2023. “FCS-Adaptor.” FCS-Adaptor. June 6, 2023.
> ———. 2024. “Rapid and Sensitive Detection of Genome Contamination at Scale with FCS-GX.” Genome Biology 25 (1): 60.

- [MultiQC](https://pubmed.ncbi.nlm.nih.gov/27312411/)
- [Kcounter](https://github.com/apcamargo/kcounter).

> Ewels P, Magnusson M, Lundin S, Käller M. MultiQC: summarize analysis results for multiple tools and samples in a single report. Bioinformatics. 2016 Oct 1;32(19):3047-8. doi: 10.1093/bioinformatics/btw354. Epub 2016 Jun 16. PubMed PMID: 27312411; PubMed Central PMCID: PMC5039924.
> Buchfink, Benjamin, Klaus Reuter, and Hajk-Georg Drost. 2021. “Sensitive Protein Alignments at Tree-of-Life Scale Using DIAMOND.” Nature Methods 18 (4): 366–68.
> Camargo, Antônio. 2020. “Kcounter.” Kcounter. February 17, 2020. https://github.com/apcamargo/kcounter.

- [BlobToolKit](https://github.com/sanger-tol/blobtoolkit).

> Challis, Richard, Edward Richards, Jeena Rajan, Guy Cochrane, and Mark Blaxter. 2020. “BlobToolKit - Interactive Quality Assessment of Genome Assemblies.” G3 10 (4): 1361–74. Diaz, Alexander Ramos, Zaynab Butt, Priyanka Surana, Richard Challis, Sujai Kumar, and Matthieu Muffato. 2023. “BlobToolKit Pipeline.” BlobToolKit Pipeline. May 18, 2023.

- [Tiara](https://github.com/ibe-uw/tiara).

> Karlicki, Michał, Stanisław Antonowicz, and Anna Karnkowska. 2022. “Tiara: Deep Learning-Based Classification System for Eukaryotic Sequences.” Bioinformatics 38 (2): 344–50.

- [Minimap2](https://github.com/lh3/minimap2).

> Li, Heng. 2018. “Minimap2: Pairwise Alignment for Nucleotide Sequences.” Bioinformatics 34 (18): 3094–3100.

- [TensorFlow](https://www.tensorflow.org/)

> Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, et al. 2015. “TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems.”

- [VecScreen](https://manpages.debian.org/testing/ncbi-tools-bin/vecscreen.1.en.html).

> NCBI. 2001. “NCBI VecScreen.” NCBI VecScreen. October 5, 2001.

- [Scikit-learn](https://scikit-learn.org/)

  > Pedregosa, F., G. Varoquaux, A. Gramfort, and V. Michel. 2011. “Scikit-learn: Machine Learning in Python.” Journal of Machine Learning Research 12 (October): 2825–30.

## Software packaging/containerisation tools

- [Anaconda](https://anaconda.com)
- [Conda](https://conda.org/)

> Anaconda Software Distribution. Computer software. Vers. 2-2.4.0. Anaconda, Nov. 2016. Web.
> conda contributors. conda: A system-level, binary package and environment manager running on all major operating systems and platforms. Computer software. https://github.com/conda/conda

- [Bioconda](https://pubmed.ncbi.nlm.nih.gov/29967506/)

41 changes: 25 additions & 16 deletions README.md
@@ -1,6 +1,6 @@
[![Cite with Zenodo](http://img.shields.io/badge/DOI-10.5281/zenodo.XXXXXXX-1073c8?labelColor=000000)](https://doi.org/10.5281/zenodo.XXXXXXX)

[![Nextflow](https://img.shields.io/badge/nextflow%20DSL2-%E2%89%A522.10.1-23aa62.svg)](https://www.nextflow.io/)
[![Nextflow](https://img.shields.io/badge/nextflow%20DSL2-%E2%89%A523.04.0-23aa62.svg)](https://www.nextflow.io/)
[![run with conda](http://img.shields.io/badge/run%20with-conda-3EB049?labelColor=000000&logo=anaconda)](https://docs.conda.io/en/latest/)
[![run with docker](https://img.shields.io/badge/run%20with-docker-0db7ed?labelColor=000000&logo=docker)](https://www.docker.com/)
[![run with singularity](https://img.shields.io/badge/run%20with-singularity-1d355c.svg?labelColor=000000)](https://sylabs.io/docs/)
@@ -10,7 +10,7 @@

## Introduction

**sanger-tol/ascc** is a bioinformatics pipeline that is meant for detecting cobionts and contaminants in genome assemblies. ASCC stands for Assembly Screen for Cobionts and Contaminants. The pipeline aggregates tools such as BLAST, GC and coverage calculation, FCS-adaptor, FCS-GX, VecScreen, BlobToolKit, the BlobToolKit pipeline, Tiara, Kraken, Diamond BLASTX, and kmer counting and with kcounter+scipy. The main outputs are:
**sanger-tol/ascc** is a bioinformatics pipeline for detecting cobionts and contaminants in genome assemblies. ASCC stands for Assembly Screen for Cobionts and Contaminants. The pipeline was initially made for the Aquatic Symbiosis Genomics project but is now used more widely. The pipeline aggregates tools such as BLAST, GC and coverage calculation, FCS-adaptor, FCS-GX, VecScreen, BlobToolKit, the BlobToolKit pipeline, Tiara, Kraken, Diamond BLASTX, and k-mer counting with kcounter+scipy. The main outputs are:

- A CSV table with taxonomic classifications of the sequences from the constituent tools.
- A BlobToolKit dataset that can contain variables that are not present in BlobToolKit datasets produced by the BlobToolKit pipeline (https://github.com/sanger-tol/blobtoolkit) on its own. For example, ASCC can incorporate FCS-GX results into a BlobToolKit dataset.
@@ -19,6 +19,8 @@

![sanger-tol/ascc overview diagram](docs/images/ascc_overview_diagram.png)

The pipeline is in an early state of development and has not yet been thoroughly tested. Its components are functional, though, so it is possible to run it.

1. Run a selection of processes from the list below (pick any that you think will be useful).

- FCS-GX
@@ -41,28 +43,36 @@
- CSV table of average coverage per phylum
- Adapter and organellar contamination report files

There is a Biodiversity Genomics Academy video that introduces the ASCC pipeline on YouTube: https://www.youtube.com/watch?v=jrqjbwrg9-c.

## Installation of the databases

Instructions for installing the databases can be found [here](./docs/databases.md).

For testing the pipeline with tiny files, there is a script that downloads a small assembly FASTA file (a fragment of a Plasmodium genome) and small database files. The script can be found [here](./bin/download_tiny_database_test_files.sh). This is just for checking that the pipeline runs without crashing. These database files are just small fragments of real databases, so they are not meant for production runs.
A run with these databases can be done using this test YAML file that specifies the paths to the database files: [tinytest.yaml](./assets/tinytest.yaml). Before use, you may need to edit the paths in the YAML file to replace relative paths with absolute paths.
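As a hedged example of the path edit mentioned above, a relative database path in the test YAML would be rewritten to an absolute one; the key name below is illustrative only — check `assets/tinytest.yaml` for the actual keys.

```yaml
# Illustrative only — key names here are assumptions; see assets/tinytest.yaml.
# Before:
vecscreen_database_path: ./databases/vecscreen
# After:
vecscreen_database_path: /home/myuser/ascc/databases/vecscreen
```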

## Usage

The pipeline uses a YAML file to specify the input file paths and parameters. A description of the YAML file contents is [here](./docs/usage.md).

> **Note**
> If you are new to Nextflow and nf-core, please refer to [this page](https://nf-co.re/docs/usage/installation) on how
> to set-up Nextflow. Make sure to [test your setup](https://nf-co.re/docs/usage/introduction#how-to-run-a-pipeline)
> with `-profile test` before running the workflow on actual data.

<!-- TODO nf-core: Describe the minimum required steps to execute the pipeline, e.g. how to prepare samplesheets.
Explain what rows and columns represent. For instance (please edit as appropriate):

First, prepare a samplesheet with your input data that looks as follows:

`samplesheet.csv`:

```csv
sample,fastq_1,fastq_2
CONTROL_REP1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz
sample,assembly_type,assembly_file
idImaFly1,Primary,assembly_file.fasta
```

Each row represents a fastq file (single-end) or a pair of fastq files (paired end).
Each row represents an assembled haplotype or organelle of the sample.

-->
The params-input YAML will need to contain the data detailed [here](./docs/usage.md).

Now, you can run the pipeline using:

@@ -71,7 +81,8 @@ Now, you can run the pipeline using:
```bash
nextflow run sanger-tol/ascc \
-profile <docker/singularity/.../institute> \
--input YAML \
--input samplesheet \
--params-input YAML \
--outdir <OUTDIR> -entry SANGERTOL_ASCC --include ALL
```
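The contents of the params-input YAML are documented in `./docs/usage.md`; purely as a hypothetical sketch of its shape (none of these key names are confirmed by this PR), it might resemble:

```yaml
# Hypothetical params-input YAML — key names are illustrative only; see docs/usage.md.
taxid: 352914                                  # NCBI taxid of the assembled species (value illustrative)
fcs_gx_database_path: /path/to/fcsgx_db        # FCS-GX database
vecscreen_database_path: /path/to/vecscreen_db
diamond_nr_database_path: /path/to/diamond_nr
```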

@@ -80,6 +91,10 @@ nextflow run sanger-tol/ascc \
> provided by the `-c` Nextflow option can be used to provide any configuration _**except for parameters**_;
> see [docs](https://nf-co.re/usage/configuration#custom-configuration-files).

## Output

A description of the output files of the pipeline can be found [here](./docs/output.md).

## Credits

sanger-tol/ascc was written by [Eerik Aunin](https://github.com/eeaunin), [Damon Lee Pointon](https://github.com/DLBPointon), [James Torrance](https://github.com/jt8-sanger), [Ying Sims](https://github.com/yumisims) and [Will Eagles](https://github.com/weaglesBio). Pipeline development was supervised by [Shane A. McCarthy](https://github.com/mcshane) and [Matthieu Muffato](https://github.com/muffato).
@@ -100,9 +115,3 @@ If you would like to contribute to this pipeline, please see the [contributing g
An extensive list of references for the tools used by the pipeline can be found in the [`CITATIONS.md`](CITATIONS.md) file.

This pipeline uses code and infrastructure developed and maintained by the [nf-core](https://nf-co.re) community, reused here under the [MIT license](https://github.com/nf-core/tools/blob/master/LICENSE).

> **The nf-core framework for community-curated bioinformatics pipelines.**
>
> Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
>
> _Nat Biotechnol._ 2020 Feb 13. doi: [10.1038/s41587-020-0439-x](https://dx.doi.org/10.1038/s41587-020-0439-x).
4 changes: 2 additions & 2 deletions assets/adaptivecard.json
@@ -5,7 +5,7 @@
"contentType": "application/vnd.microsoft.card.adaptive",
"contentUrl": null,
"content": {
"\$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"\$schema": "https://adaptivecards.io/schemas/adaptive-card.json",
"msteams": {
"width": "Full"
},
@@ -50,7 +50,7 @@
"title": "Pipeline Configuration",
"card": {
"type": "AdaptiveCard",
"\$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"\$schema": "https://adaptivecards.io/schemas/adaptive-card.json",
"body": [
{
"type": "FactSet",