diff --git a/.htmltest.yml b/.htmltest.yml
index ddb5b5cdc..f7eeb2f0b 100644
--- a/.htmltest.yml
+++ b/.htmltest.yml
@@ -1,2 +1,4 @@
DirectoryPath: public/
-IgnoreDirectoryMissingTrailingSlash: true
\ No newline at end of file
+IgnoreDirectoryMissingTrailingSlash: true
+IgnoreCanonicalBrokenLinks: false
+TestFilesConcurrently: true
\ No newline at end of file
diff --git a/.vale.ini b/.vale.ini
new file mode 100644
index 000000000..339770a3b
--- /dev/null
+++ b/.vale.ini
@@ -0,0 +1,17 @@
+StylesPath = .vale/styles
+
+MinAlertLevel = suggestion
+
+Packages = RedHat, AsciiDoc
+Vocab = OpenShiftDocs
+
+# Ignore files in dirs starting with `.` to avoid raising errors for `.vale/fixtures/*/testinvalid.adoc` files
+[[!.]*.adoc]
+BasedOnStyles = RedHat, AsciiDoc
+
+# Optional: pass doc attributes to asciidoctor before linting
+#[asciidoctor]
+#openshift-enterprise = YES
+
+# Disabling rules (NO)
+RedHat.ReleaseNotes = NO
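+
+# Typical local usage (assuming the Vale CLI is installed):
+#   vale sync       # download the style packages listed above into StylesPath
+#   vale content/   # lint the AsciiDoc sources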
diff --git a/content/blog/2021-12-31-medical-diagnosis.md b/content/blog/2021-12-31-medical-diagnosis.md
index 035583283..43de51bcb 100644
--- a/content/blog/2021-12-31-medical-diagnosis.md
+++ b/content/blog/2021-12-31-medical-diagnosis.md
@@ -30,7 +30,7 @@ For a recorded demo deploying the pattern and seeing the dashboards available to
---
-To deploy this pattern, follow the instructions outlined on the [getting-started](https://validatedpatterns.io/medical-diagnosis/getting-started/) page.
+To deploy this pattern, follow the instructions outlined on the [Getting started](/patterns/medical-diagnosis/med-getting-started/) page.
### What's happening?
diff --git a/content/learn/importing-a-cluster.adoc b/content/learn/importing-a-cluster.adoc
index 36f16e71e..559c14f7d 100644
--- a/content/learn/importing-a-cluster.adoc
+++ b/content/learn/importing-a-cluster.adoc
@@ -112,7 +112,7 @@ If you use the command line tools above you need to explicitly indicate that the
We do this by adding the label referenced in the managedSite's `clusterSelector`.
-1. Find the new cluster.
+. Find the new cluster.
+
[source,terminal]
@@ -120,7 +120,7 @@ We do this by adding the label referenced in the managedSite's `clusterSelector`
oc get managedclusters.cluster.open-cluster-management.io
----
-1. Apply the label.
+. Apply the label.
+
[source,terminal]
diff --git a/content/learn/vault.adoc b/content/learn/vault.adoc
index f7c6317e0..4a8ea98ee 100644
--- a/content/learn/vault.adoc
+++ b/content/learn/vault.adoc
@@ -14,12 +14,12 @@ include::modules/comm-attributes.adoc[]
= Deploying HashiCorp Vault in a validated pattern
[id="prerequisites"]
-= Prerequisites
+== Prerequisites
You have deployed or installed a validated pattern by using the instructions provided for that pattern. This includes having logged in to the cluster by using `oc login`, or having set your `KUBECONFIG` environment variable, and having run `./pattern.sh make install`.
[id="setting-up-hashicorp-vault"]
-= Setting up HashiCorp Vault
+== Setting up HashiCorp Vault
Any validated pattern that uses HashiCorp Vault has already deployed Vault as part of `./pattern.sh make install`. To verify that Vault is installed, first check that the `vault` project exists, and then select *Workloads* -> *Pods*:
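+
+From the command line, a quick check looks like this (a sketch, assuming the default `vault` namespace):
+
+[source,terminal]
+----
+$ oc get pods -n vault
+----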
diff --git a/content/patterns/ansible-edge-gitops/installation-details.md b/content/patterns/ansible-edge-gitops/installation-details.md
index 792629e37..86ffe5b7e 100644
--- a/content/patterns/ansible-edge-gitops/installation-details.md
+++ b/content/patterns/ansible-edge-gitops/installation-details.md
@@ -93,7 +93,7 @@ OpenShift GitOps is central to this pattern as it is responsible for installing
# ODF (OpenShift Data Foundations)
-ODF is the storage framework that is needed to provide resilient storage for OpenShift Virtualization. It is managed via the helm chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/openshift-data-foundations). This is basically the same chart that our Medical Diagnosis pattern uses (see [here](/patterns/medical-diagnosis/getting-started/) for details on the Medical Edge pattern's use of storage).
+ODF is the storage framework that is needed to provide resilient storage for OpenShift Virtualization. It is managed via the helm chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/openshift-data-foundations). This is basically the same chart that our Medical Diagnosis pattern uses (see [here](/patterns/medical-diagnosis/med-getting-started/) for details on the Medical Edge pattern's use of storage).
Please note that this chart will create a Noobaa S3 bucket named nb.epoch_timestamp.cluster-domain which will not be destroyed when the cluster is destroyed.
diff --git a/content/patterns/medical-diagnosis/_index.adoc b/content/patterns/medical-diagnosis/_index.adoc
index ad77c1b49..77c5cfba8 100644
--- a/content/patterns/medical-diagnosis/_index.adoc
+++ b/content/patterns/medical-diagnosis/_index.adoc
@@ -22,84 +22,14 @@ ci: medicaldiag
:toc:
:imagesdir: /images
:_content-type: ASSEMBLY
-include::modules/comm-attributes.adoc[]
-
-//Module to be included
-//:_content-type: CONCEPT
-//:imagesdir: ../../images
-[id="about-med-diag-pattern"]
-= About the {med-pattern}
-
-Background::
-
-This validated pattern is based on a demo implementation of an automated data pipeline for chest X-ray analysis that was previously developed by {redhat}. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veteran Affairs.
-
-This validated pattern includes the same functionality as the original demonstration. The difference is that this solution uses the GitOps framework to deploy the pattern including Operators, creation of namespaces, and cluster configuration. Using GitOps provides an efficient means of implementing continuous deployment.
-
-Workflow::
-
-* Ingest chest X-rays from a simulated X-ray machine and puts them into an `objectStore` based on Ceph.
-* The `objectStore` sends a notification to a Kafka topic.
-* A KNative Eventing listener to the topic triggers a KNative Serving function.
-* An ML-trained model running in a container makes a risk assessment of Pneumonia for incoming images.
-* A Grafana dashboard displays the pipeline in real time, along with images incoming, processed, anonymized, and full metrics collected from Prometheus.
-
-This pipeline is showcased link:https://www.youtube.com/watch?v=zja83FVsm14[in this video].
-
-image::medical-edge/dashboard.png[link="/images/medical-edge/dashboard.png"]
-
-//[NOTE]
-//====
-//This validated pattern is still under development. If you have any questions or concerns contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio].
-//====
-
-[id="about-solution-med"]
-== About the solution elements
-
-The solution aids the understanding of the following:
-* How to use a GitOps approach to keep in control of configuration and operations.
-* How to deploy AI/ML technologies for medical diagnosis using GitOps.
-
-The {med-pattern} uses the following products and technologies:
-
-* {rh-ocp} for container orchestration
-* {rh-gitops}, a GitOps continuous delivery (CD) solution
-* {rh-amq-first}, an event streaming platform based on the Apache Kafka
-* {rh-serverless-first} for event-driven applications
-* {rh-ocp-data-first} for cloud native storage capabilities
-* {grafana-op} to manage and share Grafana dashboards, data sources, and so on
-* S3 storage
-
-[id="about-architecture-med"]
-== About the architecture
-
-[IMPORTANT]
-====
-Presently, the {med-pattern} does not have an edge component. Edge deployment capabilities are planned as part of the pattern architecture for a future release.
-====
-
-image::medical-edge/edge-medical-diagnosis-marketing-slide.png[link="/images/medical-edge/edge-medical-diagnosis-marketing-slide.png"]
-
-Components are running on OpenShift either at the data center, at the medical facility, or public cloud running OpenShift.
-
-[id="about-physical-schema-med"]
-=== About the physical schema
-
-The following diagram shows the components that are deployed with the various networks that connect them.
-
-image::medical-edge/physical-network.png[link="/images/medical-edge/physical-network.png"]
-
-The following diagram shows the components that are deployed with the the data flows and API calls between them.
-
-image::medical-edge/physical-dataflow.png[link="/images/medical-edge/physical-dataflow.png"]
+include::modules/comm-attributes.adoc[]
-== Recorded demo
+include::modules/med-about-medical-diagnosis.adoc[leveloffset=+1]
-link:/videos/xray-deployment.svg[image:/videos/xray-deployment.svg[Demo\]]
+include::modules/med-architecture-schema.adoc[leveloffset=+1]
[id="next-steps_med-diag-index"]
== Next steps
-* Getting started link:getting-started[Deploy the Pattern]
-//We have relevant links on the patterns page
+* link:med-getting-started/#med-deploy-pattern[Deploying the Medical Diagnosis pattern]
\ No newline at end of file
diff --git a/content/patterns/medical-diagnosis/cluster-sizing.adoc b/content/patterns/medical-diagnosis/cluster-sizing.adoc
deleted file mode 100644
index 7f4c9584b..000000000
--- a/content/patterns/medical-diagnosis/cluster-sizing.adoc
+++ /dev/null
@@ -1,103 +0,0 @@
----
-title: Cluster Sizing
-weight: 20
-aliases: /medical-diagnosis/cluster-sizing/
----
-
-:toc:
-:imagesdir: /images
-:_content-type: ASSEMBLY
-include::modules/comm-attributes.adoc[]
-
-//Module to be included
-//:_content-type: CONCEPT
-//:imagesdir: ../../images
-[id="about-openshift-cluster-sizing-med"]
-= About OpenShift cluster sizing for the {med-pattern}
-
-To understand cluster sizing requirements for the {med-pattern}, consider the following components that the {med-pattern} deploys on the datacenter or the hub OpenShift cluster:
-
-|===
-| Name | Kind | Namespace | Description
-
-| Medical Diagnosis Hub
-| Application
-| medical-diagnosis-hub
-| Hub GitOps management
-
-| {rh-gitops}
-| Operator
-| openshift-operators
-| {rh-gitops-short}
-
-| {rh-ocp-data-first}
-| Operator
-| openshift-storage
-| Cloud Native storage solution
-
-| {rh-amq-streams}
-| Operator
-| openshift-operators
-| AMQ Streams provides Apache Kafka access
-
-| {rh-serverless-first}
-| Operator
-| - knative-serving (knative-eventing)
-| Provides access to Knative Serving and Eventing functions
-|===
-
-//AI: Removed the following since we have CI status linked on the patterns page
-//[id="tested-platforms-cluster-sizing"]
-//== Tested Platforms
-
-: Removed the following in favor of the link to OCP docs
-//[id="general-openshift-minimum-requirements-cluster-sizing"]
-//== General OpenShift Minimum Requirements
-The minimum requirements for an {ocp} cluster depend on your installation platform. For instance, for AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS], and for bare-metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal].
-
-For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.13/installing/installing-preparing.html[{ocp} documentation].
-
-//Module to be included
-//:_content-type: CONCEPT
-//:imagesdir: ../../images
-
-[id="med-openshift-cluster-size"]
-=== About {med-pattern} OpenShift cluster size
-
-The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture.
-
-For {med-pattern}, the OpenShift cluster size must be a bit larger to support the compute and storage demands of OpenShift Data Foundations and other Operators.
-//AI:Removed a few lines from here since the content is updated to remove any ambiguity. We rather use direct links (OCP docs/ GCP/AWS/Azure)
-[NOTE]
-====
-You might want to add resources when more developers are working on building their applications.
-====
-
-The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or more worker nodes.
-
-[cols="^,^,^,^"]
-|===
-| Node type | Number of nodes | Cloud provider | Instance type
-
-| Control plane and worker
-| 3 and 3
-| Google Cloud
-| n1-standard-8
-
-| Control plane and worker
-| 3 and 3
-| Amazon Cloud Services
-| m5.2xlarge
-
-| Control plane and worker
-| 3 and 3
-| Microsoft Azure
-| Standard_D8s_v3
-|===
-
-[role="_additional-resources"]
-.Additional resource
-* link:https://aws.amazon.com/ec2/instance-types/[AWS instance types]
-* link:https://learn.microsoft.com/en-us/azure/virtual-machines/sizes[Azure instance types: Sizes for virtual machines in Azure]
-* link:https://cloud.google.com/compute/docs/machine-resource[Google Cloud Platform instance types: Machine families resource and comparison guide]
-//Removed section for instance types as we did for MCG
diff --git a/content/patterns/medical-diagnosis/getting-started.adoc b/content/patterns/medical-diagnosis/getting-started.adoc
deleted file mode 100644
index 3fff9f7f6..000000000
--- a/content/patterns/medical-diagnosis/getting-started.adoc
+++ /dev/null
@@ -1,397 +0,0 @@
----
-title: Getting Started
-weight: 10
-aliases: /medical-diagnosis/getting-started/
----
-
-:toc:
-:imagesdir: /images
-:_content-type: ASSEMBLY
-include::modules/comm-attributes.adoc[]
-
-//Module to be included
-//:_content-type: PROCEDURE
-//:imagesdir: ../../../images
-[id="deploying-med-pattern"]
-= Deploying the {med-pattern}
-
-.Prerequisites
-
-* An OpenShift cluster
- ** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console].
- ** Select *Services* -> *Containers* -> *Create cluster*.
- ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. See link:../../medical-diagnosis/cluster-sizing[sizing your cluster].
-* A GitHub account and a token for it with repositories permissions, to read from and write to your forks.
-* An S3-capable Storage set up in your public or private cloud for the x-ray images
-* The Helm binary, see link:https://helm.sh/docs/intro/install/[Installing Helm]
-For installation tooling dependencies, see link:https://validatedpatterns.io/learn/quickstart/[Patterns quick start].
-
-[NOTE]
-====
-The {med-pattern} does not have a dedicated hub or edge cluster.
-====
-
-[id="setting-up-an-s3-bucket-for-the-xray-images-getting-started"]
-=== Setting up an S3 Bucket for the xray-images
-
-An S3 bucket is required for image processing.
-For information about creating a bucket in AWS S3, see the <> section.
-
-For information about creating the buckets on other cloud providers, see the following links:
-
-* link:https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html[AWS S3]
-* link:https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal[Azure Blob Storage]
-* link:https://cloud.google.com/storage/docs/quickstart-console[GCP Cloud Storage]
-
-//Module to be included
-//:_content-type: PROCEDURE
-//:imagesdir: ../../../images
-
-[id="utilities"]
-= Utilities
-//AI: Update the use of community and VP post naming tier update
-
-To use the link:https://github.com/validatedpatterns/utilities[utilities] that are available, export some environment variables for your cloud provider.
-
-.Example for AWS. Ensure that you replace values with your keys:
-
-[source,terminal]
-----
-export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX
-export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-----
-
-Create the S3 bucket and copy over the data from the validated patterns public bucket to the created bucket for your demo. You can do this on the cloud providers console or you can use the scripts that are provided in link:https://github.com/validatedpatterns/utilities[utilities] repository.
-
-[source,terminal]
-----
-$ python s3-create.py -b mytest-bucket -r us-west-2 -p
-$ python s3-sync-buckets.py -s validated-patterns-md-xray -t mytest-bucket -r us-west-2
-----
-
-.Example output
-
-image:/videos/bucket-setup.svg[Bucket setup]
-
-Note the name and URL for the bucket for further pattern configuration. For example, you must update these values in a `values-global.yaml` file, where there is a section for `s3:`
-
-[id="preparing-for-deployment"]
-= Preparing for deployment
-.Procedure
-
-. Fork the link:https://github.com/validatedpatterns/medical-diagnosis[medical-diagnosis] repository on GitHub. You must fork the repository because your fork will be updated as part of the GitOps and DevOps processes.
-. Clone the forked copy of this repository.
-+
-[source,terminal]
-----
-$ git clone git@github.com:/medical-diagnosis.git
-----
-
-. Create a local copy of the Helm values file that can safely include credentials.
-+
-[WARNING]
-====
-Do not commit this file. You do not want to push personal credentials to GitHub.
-====
-+
-Run the following commands:
-+
-[source,terminal]
-----
-$ cp values-secret.yaml.template ~/values-secret-medical-diagnosis.yaml
-$ vi ~/values-secret-medical-diagnosis.yaml
-----
-+
-.Example `values-secret.yaml` file
-
-[source,yaml]
-----
-version "2.0"
-secrets:
- # NEVER COMMIT THESE VALUES TO GIT
-
- # Database login credentials and configuration
- - name: xraylab
- fields:
- - name: database-user
- value: xraylab
- - name: database-host
- value: xraylabdb
- - name: database-db
- value: xraylabdb
- - name: database-master-user
- value: xraylab
- - name: database-password
- onMissingValue: generate
- vaultPolicy: validatedPatternDefaultPolicy
- - name: database-root-password
- onMissingValue: generate
- vaultPolicy: validatedPatternDefaultPolicy
- - name: database-master-password
- onMissingValue: generate
- vaultPolicy: validatedPatternDefaultPolicy
-
- # Grafana Dashboard admin user/password
- - name: grafana
- fields:
- - name: GF_SECURITY_ADMIN_USER:
- value: root
- - name: GF_SECURITY_ADMIN_PASSWORD:
- onMissingValue: generate
- vaultPolicy: validatedPatternDefaultPolicy
-----
-+
-By default, Vault password policy generates the passwords for you. However, you can create your own passwords.
-+
-[NOTE]
-====
-When defining a custom password for the database users, avoid using the `$` special character as it gets interpreted by the shell and will ultimately set the incorrect desired password.
-====
-
-. To customize the deployment for your cluster, update the `values-global.yaml` file by running the following commands:
-+
-[source,terminal]
-----
-$ git checkout -b my-branch
-$ vi values-global.yaml
-----
-+
-Replace instances of PROVIDE_ with your specific configuration
-+
-[source,yaml]
-----
- ...omitted
- datacenter:
- cloudProvider: PROVIDE_CLOUD_PROVIDER #AWS, AZURE, GCP
- storageClassName: PROVIDE_STORAGECLASS_NAME #gp3-csi
- region: PROVIDE_CLOUD_REGION #us-east-2
- clustername: PROVIDE_CLUSTER_NAME #OpenShift clusterName
- domain: PROVIDE_DNS_DOMAIN #example.com
-
- s3:
- # Values for S3 bucket access
- # Replace with AWS region where S3 bucket was created
- # Replace and with your OpenShift cluster values
- # bucketSource: "https://s3..amazonaws.com/"
- bucketSource: PROVIDE_BUCKET_SOURCE #validated-patterns-md-xray
- # Bucket base name used for xray images
- bucketBaseName: "xray-source"
-----
-+
-[source,terminal]
-----
-$ git add values-global.yaml
-$ git commit values-global.yaml
-$ git push origin my-branch
-----
-
-. To deploy the pattern, you can use the link:/infrastructure/using-validated-pattern-operator/[{validated-patterns-op}]. If you do use the Operator, skip to <>.
-
-. To preview the changes that will be implemented to the Helm charts, run the following command:
-+
-[source,terminal]
-----
-$ ./pattern.sh make show
-----
-
-. Login to your cluster by running the following command:
-+
-[source,terminal]
-----
-$ oc login
-----
-+
-Optional: Set the `KUBECONFIG` variable for the `kubeconfig` file path:
-+
-[source,terminal]
-----
- export KUBECONFIG=~/
-----
-
-[id="check-the-values-files-before-deployment"]
-== Check the values files before deployment
-
-To ensure that you have the required variables to deploy the {med-pattern}, run the `./pattern.sh make predeploy` command. You can review your values and make updates, if required.
-
-You must review the following values files before deploying the {med-pattern}:
-
-|===
-| Values File | Description
-
-| values-secret.yaml
-| Values file that includes the secret parameters required by the pattern
-
-| values-global.yaml
-| File that contains all the global values used by Helm to deploy the pattern
-|===
-
-[NOTE]
-====
-Before you run the `./pattern.msh make install` command, ensure that you have the correct values for:
-```
-- domain
-- clusterName
-- cloudProvider
-- storageClassName
-- region
-- bucketSource
-```
-====
-
-//image::/videos/predeploy.svg[link="/videos/predeploy.svg"]
-
-//Module to be included
-//:_content-type: PROCEDURE
-//:imagesdir: ../../../images
-[id="med-deploy-pattern_{context}"]
-= Deploy
-
-. To apply the changes to your cluster, run the following command:
-+
-[source,terminal]
-----
-$ ./pattern.sh make install
-----
-+
-If the installation fails, you can go over the instructions and make updates, if required.
-To continue the installation, run the following command:
-+
-[source,terminal]
-----
-$ ./pattern.sh make update
-----
-+
-This step might take some time, especially for the {ocp-data-short} Operator components to install and synchronize. The `./pattern.sh make install` command provides some progress updates during the installation process. It can take up to twenty minutes. Compare your `./pattern.sh make install` run progress with the following video that shows a successful installation.
-+
-image::/videos/xray-deployment.svg[link="/videos/xray-deployment.svg"]
-
-. Verify that the Operators have been installed.
-.. To verify, in the {ocp} web console, navigate to *Operators* → *Installed Operators* page.
-.. Check that the Operator is installed in the `openshift-operators` namespace and its status is `Succeeded`. Ensure that {ocp-data-short} is listed in the list of installed Operators.
-
-
-//Module to be included
-//:_content-type: PROCEDURE
-//:imagesdir: ../../../images
-[id="using-openshift-gitops-to-check-on-application-progress-getting-started"]
-== Using OpenShift GitOps to check on Application progress
-
-To check the various applications that are being deployed, you can view the progress of the {rh-gitops-short} Operator.
-
-. Obtain the ArgoCD URLs and passwords.
-+
-The URLs and login credentials for ArgoCD change depending on the pattern name and the site names they control. Follow the instructions below to find them, however you choose to deploy the pattern.
-+
-Display the fully qualified domain names, and matching login credentials, for
-all ArgoCD instances:
-+
-[source,terminal]
-----
-ARGO_CMD=`oc get secrets -A -o jsonpath='{range .items[*]}{"oc get -n "}{.metadata.namespace}{" routes; oc -n "}{.metadata.namespace}{" extract secrets/"}{.metadata.name}{" --to=-\\n"}{end}' | grep gitops-cluster`
-CMD=`echo $ARGO_CMD | sed 's|- oc|-;oc|g'`
-eval $CMD
-----
-+
-.Example output
-+
-[source,text]
-----
-NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
-hub-gitops-server hub-gitops-server-medical-diagnosis-hub.apps.wh-medctr.blueprints.rhecoeng.com hub-gitops-server https passthrough/Redirect None
-# admin.password
-xsyYU6eSWtwniEk1X3jL0c2TGfQgVpDH
-NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
-cluster cluster-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com cluster 8080 reencrypt/Allow None
-kam kam-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com kam 8443 passthrough/None None
-openshift-gitops-server openshift-gitops-server-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com openshift-gitops-server https passthrough/Redirect None
-# admin.password
-FdGgWHsBYkeqOczE3PuRpU1jLn7C2fD6
-----
-+
-[IMPORTANT]
-====
-Examine the `medical-diagnosis-hub` ArgoCD instance. You can track all the applications for the pattern in this instance.
-====
-
-. Check that all applications are synchronized. There are thirteen different ArgoCD `applications` that are deployed as part of this pattern.
-
-
-//Module to be included
-//:_content-type: PROCEDURE
-//:imagesdir: ../../../images
-[id="viewing-the-grafana-based-dashboard-getting-started"]
-== Viewing the Grafana based dashboard
-
-. Accept the SSL certificates on the browser for the dashboard. In the {ocp} web console, go to the Routes for project `openshift-storage``. Click the URL for the `s3-rgw`.
-+
-image::medical-edge/storage-route.png[link="/images/medical-edge/storage-route.png"]
-+
-Ensure that you see some XML and not the access denied error message.
-+
-image::medical-edge/storage-rgw-route.png[link="/images/medical-edge/storage-rgw-route.png"]
-
-. While still looking at Routes, change the project to `xraylab-1`. Click the URL for the `image-server`. Ensure that you do not see an access denied error message. You must to see a `Hello World` message.
-+
-image::medical-edge/grafana-routes.png[link="/images/medical-edge/grafana-routes.png"]
-
-. Turn on the image file flow. There are three ways to go about this.
-+
-You can go to the command-line (make sure you have KUBECONFIG set, or are logged into the cluster.
-+
-[source,terminal]
-----
-$ oc scale deploymentconfig/image-generator --replicas=1 -n xraylab-1
-----
-+
-Or you can go to the OpenShift UI and change the view from Administrator to Developer and select Topology. From there select the `xraylab-1` project.
-+
-image::medical-edge/dev-topology.png[link="/images/medical-edge/dev-topology.png"]
-+
-Right-click on the `image-generator` pod icon and select `Edit Pod count`.
-+
-image::medical-edge/dev-topology-menu.png[link="/images/medical-edge/dev-topology-menu.png"]
-+
-Up the pod count from `0` to `1` and save.
-+
-image::medical-edge/dev-topology-pod-count.png[link="/images/medical-edge/dev-topology-pod-count.png"]
-+
-Alternatively, you can have the same outcome on the Administrator console.
-+
-Go to the OpenShift UI under Workloads, select Deploymentconfigs for Project `xraylab-1`.
-Click `image-generator` and increase the pod count to 1.
-+
-image::medical-edge/start-image-flow.png[link="/images/medical-edge/start-image-flow.png"]
-
-
-//Module to be included
-//:_content-type: PROCEDURE
-//:imagesdir: ../../../images
-[id="making-some-changes-on-the-dashboard-getting-started"]
-== Making some changes on the dashboard
-
-You can change some of the parameters and watch how the changes effect the dashboard.
-
-. You can increase or decrease the number of image generators.
-+
-[source,terminal]
-----
-$ oc scale deploymentconfig/image-generator --replicas=2
-----
-+
-Check the dashboard.
-+
-[source,terminal]
-----
-$ oc scale deploymentconfig/image-generator --replicas=0
-----
-+
-Watch the dashboard stop processing images.
-
-. You can also simulate the change of the AI model version - as it's only an environment variable in the Serverless Service configuration.
-+
-[source,terminal]
-----
-$ oc patch service.serving.knative.dev/risk-assessment --type=json -p '[{"op":"replace","path":"/spec/template/metadata/annotations/revisionTimestamp","value":"'"$(date +%F_%T)"'"},{"op":"replace","path":"/spec/template/spec/containers/0/env/0/value","value":"v2"}]'
-----
-+
-This changes the model version value, and the `revisionTimestamp` in the annotations, which triggers a redeployment of the service.
diff --git a/content/patterns/medical-diagnosis/ideas-for-customization.adoc b/content/patterns/medical-diagnosis/ideas-for-customization.adoc
deleted file mode 100644
index fba7350e2..000000000
--- a/content/patterns/medical-diagnosis/ideas-for-customization.adoc
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Ideas for customization
-weight: 50
-aliases: /medical-diagnosis/ideas-for-customization/
----
-:toc:
-:imagesdir: /images
-:_content-type: ASSEMBLY
-include::modules/comm-attributes.adoc[]
-
-//Module to be included
-//:_content-type: CONCEPT
-//:imagesdir: ../../images
-
-[id="about-customizing-pattern-med"]
-= About customizing the pattern {med-pattern}
-
-One of the major goals of the {solution-name-upstream} development process is to create modular and customizable demos. The {med-pattern} is just an example of how AI/ML workloads built for object detection and classification can be run on OpenShift clusters. Consider your workloads for a moment - how would your workload best consume the pattern framework? Do your consumers require on-demand or near real-time responses when using your application? Is your application processing images or data that is protected by either Government Privacy Laws or HIPAA?
-The {med-pattern} can answer the call to either of these requirements by using {serverless-short} and {ocp-data-short}.
-
-[id="understanding-different-ways-to-use-med-pattern"]
-== Understanding different ways to use the {med-pattern}
-
-. The {med-pattern} is scanning X-Ray images to determine the probability that a patient might or might not have Pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that use object detection and classification. For example, the pattern could be used to scan C/T images for anomalies in the body such as Sepsis, Cancer, or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease, and bowel disorders like Crohn's disease.
-. The Transportation Security Agency (TSA) could use the {med-pattern} in a way that enhances their existing scanning capabilities to detect with a higher probability restricted items carried on a person or hidden away in a piece of luggage. With Machine Learning Operations (MLOps), the model is constantly training and learning to better detect those items that are dangerous but which are not necessarily metallic, such as a firearm or a knife. The model is also training to dismiss those items that are authorized; ultimately saving passengers from being stopped and searched at security checkpoints.
-. Militaries could use images collected from drones, satellites, or other platforms to identify objects and determine with probability what that object is. For example, the model could be trained to determine a type of ship, potentially its country of origin, and other such identifying characteristics.
-. Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. An image of the item, including using different types of light, could be analyzed to help expose defects before packaging and distributing. The item could be routed to a defect area.
-
-These are just a few ideas to help you understand how you could use the {med-pattern} as a framework for your application.
-
-//We have relevant links on the patterns page
-//AI: Why does this point to AEG though? https://github.com/validatedpatterns/ansible-edge-gitops/issues[Report Bugs]
diff --git a/content/patterns/medical-diagnosis/med-cluster-sizing.adoc b/content/patterns/medical-diagnosis/med-cluster-sizing.adoc
new file mode 100644
index 000000000..49e9e5426
--- /dev/null
+++ b/content/patterns/medical-diagnosis/med-cluster-sizing.adoc
@@ -0,0 +1,15 @@
+---
+title: Cluster sizing
+weight: 20
+aliases: /medical-diagnosis/med-cluster-sizing/
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+
+include::modules/comm-attributes.adoc[]
+
+include::modules/med-about-cluster-sizing.adoc[leveloffset=+1]
+
+include::modules/med-ocp-cluster-sizing.adoc[leveloffset=+1]
diff --git a/content/patterns/medical-diagnosis/med-getting-started.adoc b/content/patterns/medical-diagnosis/med-getting-started.adoc
new file mode 100644
index 000000000..c3c9b4979
--- /dev/null
+++ b/content/patterns/medical-diagnosis/med-getting-started.adoc
@@ -0,0 +1,49 @@
+---
+title: Getting started
+weight: 10
+aliases: /medical-diagnosis/med-getting-started/
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+include::modules/comm-attributes.adoc[]
+
+[id="general-prerequisites_{context}"]
+= Prerequisites
+
+* An OpenShift cluster
+ ** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console].
+ ** Select *OpenShift* -> *Clusters* -> *Create cluster*.
+ ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes` (a quick check is shown after this list). For sizing guidance, see link:../../medical-diagnosis/med-cluster-sizing[sizing your cluster].
+* A GitHub account and a token for it with repository permissions, to read from and write to your forks.
+* S3-capable storage set up in your public or private cloud for the X-ray images.
+* The Helm binary. For more information, see link:https://helm.sh/docs/intro/install/[Installing Helm].
+* For installation tooling dependencies, see link:https://validatedpatterns.io/learn/quickstart/[Patterns quick start].
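+
+A quick way to confirm that a dynamic `StorageClass` is available (a sketch, assuming you are logged in to the cluster):
+
+[source,terminal]
+----
+$ oc get storageclass
+----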
+
+[NOTE]
+====
+The {med-pattern} does not have a dedicated hub or edge cluster.
+====
+
+[id="setting-up-storage-for-xray-images"]
+== Setting up storage for the X-ray images
+
+Setting up storage is required for image processing. For information about creating storage buckets on the supported cloud providers, see the following links:
+
+* link:https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html[AWS S3]
+* link:https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal[Azure Blob Storage]
+* link:https://cloud.google.com/storage/docs/quickstart-console[GCP Cloud Storage]
+
+include::modules/med-setup-aws-s3-bucket-with-utilities.adoc[leveloffset=+2]
+
+include::modules/med-preparing-for-deployment.adoc[leveloffset=+1]
+
+include::modules/med-deploying-med-diag-pattern.adoc[leveloffset=+1]
+
+[id="post-deployment-configuration_{context}"]
+== Post-deployment configuration
+
+include::modules/med-using-ocp-gitops-to-check-app-progress.adoc[leveloffset=+2]
+
+include::modules/med-viewing-grafana-dashboard.adoc[leveloffset=+2]
diff --git a/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc b/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc
new file mode 100644
index 000000000..16763f775
--- /dev/null
+++ b/content/patterns/medical-diagnosis/med-ideas-for-customization.adoc
@@ -0,0 +1,12 @@
+---
+title: Ideas for customization
+weight: 50
+aliases: /medical-diagnosis/med-ideas-for-customization/
+---
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+
+include::modules/comm-attributes.adoc[]
+
+include::modules/med-about-customizing-pattern.adoc[leveloffset=+1]
diff --git a/content/patterns/medical-diagnosis/med-troubleshooting.adoc b/content/patterns/medical-diagnosis/med-troubleshooting.adoc
new file mode 100644
index 000000000..8b5ce7c58
--- /dev/null
+++ b/content/patterns/medical-diagnosis/med-troubleshooting.adoc
@@ -0,0 +1,14 @@
+---
+title: Troubleshooting
+weight: 40
+aliases: /medical-diagnosis/med-troubleshooting/
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: REFERENCE
+include::modules/comm-attributes.adoc[]
+
+include::modules/med-about-makefile.adoc[leveloffset=+1]
+
+include::modules/med-troubleshooting-deployment.adoc[leveloffset=+1]
\ No newline at end of file
diff --git a/modules/med-about-cluster-sizing.adoc b/modules/med-about-cluster-sizing.adoc
new file mode 100644
index 000000000..719764c4b
--- /dev/null
+++ b/modules/med-about-cluster-sizing.adoc
@@ -0,0 +1,41 @@
+
+:_content-type: CONCEPT
+:imagesdir: ../../images
+
+[id="about-openshift-cluster-sizing-med"]
+= About OpenShift cluster sizing for the {med-pattern}
+
+The {med-pattern} deploys the following components on the datacenter or the hub OpenShift cluster:
+
+|===
+| Name | Kind | Namespace | Description
+
+| Medical Diagnosis Hub
+| Application
+| medical-diagnosis-hub
+| Hub GitOps management
+
+| {rh-gitops}
+| Operator
+| openshift-operators
+| {rh-gitops-short}
+
+| {rh-ocp-data-first}
+| Operator
+| openshift-storage
+| Cloud Native storage solution
+
+| {rh-amq-streams}
+| Operator
+| openshift-operators
+| AMQ Streams provides Apache Kafka access
+
+| {rh-serverless-first}
+| Operator
+| - knative-serving (knative-eventing)
+| Provides access to Knative Serving and Eventing functions
+|===
+
+//AI: Removed the following since we have CI status linked on the patterns page
+//[id="tested-platforms-cluster-sizing"]
+//== Tested Platforms
\ No newline at end of file
diff --git a/modules/med-about-customizing-pattern.adoc b/modules/med-about-customizing-pattern.adoc
new file mode 100644
index 000000000..53bd2582b
--- /dev/null
+++ b/modules/med-about-customizing-pattern.adoc
@@ -0,0 +1,24 @@
+:_content-type: CONCEPT
+:imagesdir: ../../images
+
+[id="about-customizing-pattern-med"]
+= About customizing the {med-pattern}
+
+One of the major goals of the {solution-name-upstream} development process is to create modular and customizable demos. The {med-pattern} is just an example of how AI/ML workloads built for object detection and classification can be run on OpenShift clusters. Consider your workloads for a moment:
+
+* How would your workload best consume the pattern framework?
+
+* Do your consumers require on-demand or near real-time responses when using your application?
+
+* Is your application processing images or data that is protected by either Government Privacy Laws or HIPAA?
+
+The {med-pattern} can address any of these requirements by using {serverless-short} and {ocp-data-short}.
+
+[id="understanding-different-ways-to-use-med-pattern"]
+== Understanding different ways to use the {med-pattern}
+
+* The {med-pattern} scans X-ray images to determine the probability that a patient has pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that use object detection and classification. For example, the pattern could be used to scan computed tomography (CT) images for anomalies in the body such as sepsis, cancer, or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease, and bowel disorders like Crohn's disease.
+* The United States Transportation Security Agency (TSA) could use the {med-pattern} in a way that enhances their existing scanning capabilities to detect with a higher probability restricted items carried on a person or hidden away in a piece of luggage. With Machine Learning Operations (MLOps), the model is constantly training and learning to better detect those items that are dangerous but which are not necessarily metallic, such as a firearm or a knife. The model is also training to dismiss those items that are authorized, which can save passengers from being stopped and searched at security checkpoints.
+* Militaries could use images collected from drones, satellites, or other platforms to identify objects and determine with probability what that object is. For example, the model could be trained to determine a type of ship, potentially its country of origin, and other such identifying characteristics.
+* Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. An image of the item, including using different types of light, could be analyzed to help expose defects before packaging and distributing. The item could be routed to a defect area.
+
+These are just a few ideas to help you understand how you could use the {med-pattern} as a framework for your application.
diff --git a/modules/med-about-makefile.adoc b/modules/med-about-makefile.adoc
new file mode 100644
index 000000000..365fa239f
--- /dev/null
+++ b/modules/med-about-makefile.adoc
@@ -0,0 +1,31 @@
+:_content-type: CONCEPT
+:imagesdir: ../../images
+
+[id="med-understanding-the-makefile-troubleshooting"]
+= Understanding the Makefile
+
+The Makefile is the entrypoint for the pattern. We use the Makefile to bootstrap the pattern to the cluster. After the initial bootstrapping of the pattern, the Makefile isn't required for ongoing operations but can often be useful when you need to make a change to a config within the pattern. Run the `make upgrade` command to refresh the bootstrap resources without having to tear down the pattern or cluster.
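+
+For example, after changing a configuration value in your fork, you can refresh the bootstrap resources with (a sketch):
+
+[source,terminal]
+----
+$ ./pattern.sh make upgrade
+----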
+
+[id="about-make-install-make-deploy-command"]
+== About the make install and make deploy commands
+
+Running `make install` within the pattern application triggers a `make deploy` from the `/common` directory. This initializes the `common` components of the pattern framework and installs a Helm chart in the `default` namespace. At this point, cluster services, such as {rh-rhacm-first} and {rh-gitops}, are deployed.
+
+After you have installed the components from the `common` directory, the pattern runs the remaining tasks within the `make install` target.
+//AI: Check which are these other tasks
+
+[id="about-make-vault-init-make-load-secrets-commands"]
+== About the make vault-init and make load-secrets commands
+
+The {med-pattern} is integrated with {hashicorp-vault} and {eso-op} services for secrets management within the cluster. These targets install Vault from a {helm-chart} and load the secrets file (`values-secret.yaml`) that you created during link:../med-getting-started/#preparing-for-deployment[Getting Started].
+
+If `values-secret.yaml` does not exist, `make` exits with an error that says so. If the `values-secret.yaml` file exists but is improperly formatted, {rh-ansible} exits with a formatting error. To verify the format of the secret, see link:../med-getting-started/#preparing-for-deployment[Getting Started].
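+
+If you need to rerun these targets, for example after correcting your secrets file, the invocations look like this (a sketch):
+
+[source,terminal]
+----
+$ ./pattern.sh make vault-init
+$ ./pattern.sh make load-secrets
+----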
+
+[id="about-make-bootstrap-make-upgrade-commands"]
+== About the make bootstrap and make upgrade commands
+The `make bootstrap` command is the target used for deploying the application-specific components of the pattern. It is the final step in the initial `make install` target. You might want to run the `make upgrade` command instead of running the `make bootstrap` command directly.
+
+Generally, running the `make upgrade` command is required when you encounter errors with the application pattern deployment. For example, if you miss a value and the chart does not render correctly, run the `make upgrade` command after fixing the value.
+
+Review the `Makefile` for the `common` and `Medical Diagnosis` components, which are located in `common/Makefile` and `./Makefile` respectively.
+
diff --git a/modules/med-about-medical-diagnosis.adoc b/modules/med-about-medical-diagnosis.adoc
new file mode 100644
index 000000000..b6bb1a2f3
--- /dev/null
+++ b/modules/med-about-medical-diagnosis.adoc
@@ -0,0 +1,46 @@
+:_content-type: CONCEPT
+:imagesdir: ../../images
+
+[id="about-med-diag-pattern"]
+= About the {med-pattern}
+
+Background::
+
+This validated pattern is based on a demo implementation of an automated data pipeline for chest X-ray analysis that {redhat} developed for the US Department of Veterans Affairs. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here].
+
+This validated pattern includes the same functionality as the original demonstration. The difference is that this solution uses the GitOps framework to deploy the pattern including Operators, creation of namespaces, and cluster configuration. Using GitOps provides an efficient means of implementing continuous deployment.
+
+Workflow::
+
+* A simulated X-ray machine generates chest X-rays, which are ingested and put into an `objectStore` based on Ceph.
+* The `objectStore` sends a notification to a Kafka topic.
+* A Knative Eventing listener subscribed to the topic triggers a Knative Serving function.
+* An ML-trained model running in a container makes a risk assessment of pneumonia for incoming images.
+* A Grafana dashboard displays the pipeline in real time, along with incoming, processed, and anonymized images, and full metrics collected from Prometheus.
+
+This pipeline is showcased link:https://www.youtube.com/watch?v=zja83FVsm14[in this video].
+
+image::medical-edge/dashboard.png[link="/images/medical-edge/dashboard.png"]
+
+//[NOTE]
+//====
+//This validated pattern is still under development. If you have any questions or concerns contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio].
+//====
+
+[id="about-solution-med"]
+== About the solution elements
+
+The solution aids in understanding the following:
+
+* How to use a GitOps approach to keep in control of configuration and operations.
+* How to deploy AI/ML technologies for medical diagnosis using GitOps.
+
+The {med-pattern} uses the following products and technologies:
+
+* {rh-ocp} for container orchestration
+* {rh-gitops}, a GitOps continuous delivery (CD) solution
+* {rh-amq-first}, an event streaming platform based on Apache Kafka
+* {rh-serverless-first} for event-driven applications
+* {rh-ocp-data-first} for cloud native storage capabilities
+* {grafana-op} to manage and share Grafana dashboards, data sources, and so on
+* Storage, such as AWS S3 buckets
\ No newline at end of file
diff --git a/modules/med-architecture-schema.adoc b/modules/med-architecture-schema.adoc
new file mode 100644
index 000000000..f44328282
--- /dev/null
+++ b/modules/med-architecture-schema.adoc
@@ -0,0 +1,29 @@
+:_content-type: CONCEPT
+:imagesdir: ../../images
+
+[id="about-architecture-med"]
+= About the architecture
+
+[IMPORTANT]
+====
+Presently, the {med-pattern} does not have an edge component. Edge deployment capabilities are planned as part of the pattern architecture for a future release.
+====
+
+image::medical-edge/edge-medical-diagnosis-marketing-slide.png[link="/images/medical-edge/edge-medical-diagnosis-marketing-slide.png"]
+
+Components run on OpenShift at the data center, at the medical facility, or in a public cloud.
+
+[id="about-physical-schema-med"]
+== About the physical schema
+
+The following diagram shows the components that are deployed with the various networks that connect them.
+
+image::medical-edge/physical-network.png[link="/images/medical-edge/physical-network.png"]
+
+The following diagram shows the components that are deployed with the data flows and API calls between them.
+
+image::medical-edge/physical-dataflow.png[link="/images/medical-edge/physical-dataflow.png"]
+
+= Recorded demo
+
+link:/videos/xray-deployment.svg[image:/videos/xray-deployment.svg[Demo\]]
diff --git a/modules/med-deploying-med-diag-pattern.adoc b/modules/med-deploying-med-diag-pattern.adoc
new file mode 100644
index 000000000..0ce0db2ad
--- /dev/null
+++ b/modules/med-deploying-med-diag-pattern.adoc
@@ -0,0 +1,28 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="med-deploy-pattern"]
+= Deploying the {med-pattern}
+
+. To apply the changes to your cluster, run the following command:
++
+[source,terminal]
+----
+$ ./pattern.sh make install
+----
++
+If the installation fails, review the instructions and make any required updates.
+To continue the installation, run the following command:
++
+[source,terminal]
+----
+$ ./pattern.sh make update
+----
++
+This step might take up to twenty minutes to complete, especially while the {ocp-data-short} Operator components install and synchronize. The `./pattern.sh make install` command provides some progress updates during the installation process. Compare your `./pattern.sh make install` run progress with the following video that shows a successful installation.
++
+image::/videos/xray-deployment.svg[link="/videos/xray-deployment.svg"]
+
+. Verify that the Operators have been installed.
+.. To verify, in the {ocp} web console, navigate to *Operators* → *Installed Operators* page.
+.. Check that the Operator is installed in the `openshift-operators` namespace and its status is `Succeeded`. Ensure that {ocp-data-short} is listed in the list of installed Operators.
\ No newline at end of file
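++
+You can also verify from the command line (a sketch; ClusterServiceVersions report each Operator's installation phase):
++
+[source,terminal]
+----
+$ oc get csv -n openshift-operators
+----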
diff --git a/modules/med-ocp-cluster-sizing.adoc b/modules/med-ocp-cluster-sizing.adoc
new file mode 100644
index 000000000..c26e70b58
--- /dev/null
+++ b/modules/med-ocp-cluster-sizing.adoc
@@ -0,0 +1,50 @@
+:_content-type: CONCEPT
+:imagesdir: ../../images
+
+[id="med-openshift-cluster-size"]
+= About OpenShift cluster size for the {med-pattern}
+
+The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture.
+
+For {med-pattern}, the OpenShift cluster size must be larger than a standard cluster to support the compute and storage demands of OpenShift Data Foundations and other Operators.
+
+The minimum requirements for an {ocp} cluster depend on your installation platform, for example:
+
+* For AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS].
+
+* For bare-metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal].
+
+For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.13/installing/installing-preparing.html[{ocp} documentation].
+
+[NOTE]
+====
+You might want to add resources when more developers are working on building their applications.
+====
+
+The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or more worker nodes.
+
+[cols="^,^,^,^"]
+|===
+| Node type | Number of nodes | Cloud provider | Instance type
+
+| Control plane and worker
+| 3 and 3
+| Google Cloud
+| n1-standard-8
+
+| Control plane and worker
+| 3 and 3
+| Amazon Web Services
+| m5.2xlarge
+
+| Control plane and worker
+| 3 and 3
+| Microsoft Azure
+| Standard_D8s_v3
+|===
+
+[role="_additional-resources"]
+.Additional resources
+* link:https://aws.amazon.com/ec2/instance-types/[AWS instance types]
+* link:https://learn.microsoft.com/en-us/azure/virtual-machines/sizes[Azure instance types: Sizes for virtual machines in Azure]
+* link:https://cloud.google.com/compute/docs/machine-resource[Google Cloud Platform instance types: Machine families resource and comparison guide]
diff --git a/modules/med-preparing-for-deployment.adoc b/modules/med-preparing-for-deployment.adoc
new file mode 100644
index 000000000..0c42ca9a0
--- /dev/null
+++ b/modules/med-preparing-for-deployment.adoc
@@ -0,0 +1,169 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="preparing-for-deployment"]
+= Preparing to deploy the {med-pattern}
+
+.Procedure
+
+. Fork the link:https://github.com/validatedpatterns/medical-diagnosis[medical-diagnosis] repository on GitHub. You must fork the repository because your fork will be updated as part of the GitOps and DevOps processes.
+. Clone the forked copy of this repository.
++
+[source,terminal]
+----
+$ git clone git@github.com:<your-github-username>/medical-diagnosis.git
+----
+
+. Create a local copy of the Helm values file that can safely include credentials.
++
+[WARNING]
+====
+Do not commit this file. You do not want to push personal credentials to GitHub.
+====
++
+Run the following commands:
++
+[source,terminal]
+----
+$ cp values-secret.yaml.template ~/values-secret-medical-diagnosis.yaml
+$ vi ~/values-secret-medical-diagnosis.yaml
+----
++
+.Example `values-secret.yaml` file
+
+[source,yaml]
+----
+version "2.0"
+secrets:
+ # NEVER COMMIT THESE VALUES TO GIT
+
+ # Database login credentials and configuration
+ - name: xraylab
+ fields:
+ - name: database-user
+ value: xraylab
+ - name: database-host
+ value: xraylabdb
+ - name: database-db
+ value: xraylabdb
+ - name: database-master-user
+ value: xralab
+ - name: database-password
+ onMissingValue: generate
+ vaultPolicy: validatedPatternDefaultPolicy
+ - name: database-root-password
+ onMissingValue: generate
+ vaultPolicy: validatedPatternDefaultPolicy
+ - name: database-master-password
+ onMissingValue: generate
+ vaultPolicy: validatedPatternDefaultPolicy
+
+ # Grafana Dashboard admin user/password
+ - name: grafana
+ fields:
+ - name: GF_SECURITY_ADMIN_USER:
+ value: root
+ - name: GF_SECURITY_ADMIN_PASSWORD:
+ onMissingValue: generate
+ vaultPolicy: validatedPatternDefaultPolicy
+----
++
+By default, the Vault password policy generates the passwords for you. However, you can supply your own passwords, as shown in the sketch after the following note.
++
+[NOTE]
+====
+When defining a custom password for the database users, avoid using the `$` special character because it gets interpreted by the shell and will ultimately set the incorrect password.
+====
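++
+For example, to supply a fixed database password instead of a generated one (an illustrative sketch; any field shown with `onMissingValue: generate` takes an explicit `value` instead):
++
+[source,yaml]
+----
+    - name: database-password
+      value: replaceMeWithAStrongPassword
+----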
+
+. To customize the deployment for your cluster, update the `values-global.yaml` file by running the following commands:
++
+[source,terminal]
+----
+$ git checkout -b my-branch
+$ vi values-global.yaml
+----
++
+Replace instances of `PROVIDE_` with your specific configuration:
++
+[source,yaml]
+----
+  ...omitted
+  datacenter:
+    cloudProvider: PROVIDE_CLOUD_PROVIDER #AWS, AZURE, GCP
+    storageClassName: PROVIDE_STORAGECLASS_NAME #gp3-csi
+    region: PROVIDE_CLOUD_REGION #us-east-2
+    clustername: PROVIDE_CLUSTER_NAME #OpenShift clusterName
+    domain: PROVIDE_DNS_DOMAIN #example.com
+
+  s3:
+    # Values for S3 bucket access
+    # Replace <region> with the AWS region where the S3 bucket was created
+    # Replace <cluster-name> and <domain> with your OpenShift cluster values
+    # bucketSource: "https://s3.<region>.amazonaws.com/"
+    bucketSource: PROVIDE_BUCKET_SOURCE #validated-patterns-md-xray
+    # Bucket base name used for xray images
+    bucketBaseName: "xray-source"
+----
++
+Save the `values-global.yaml` file and commit it to your branch:
++
+[source,terminal]
+----
+$ git add values-global.yaml
+$ git commit values-global.yaml
+$ git push origin my-branch
+----
+
+. To deploy the pattern, you can use the link:/infrastructure/using-validated-pattern-operator/[{validated-patterns-op}]. If you use the Operator to deploy the pattern, skip to the _Verification_ section of this procedure.
+
+. To preview the changes that will be implemented to the Helm charts, run the following command:
++
+[source,terminal]
+----
+$ ./pattern.sh make show
+----
+
+. Log in to your cluster by running the following command:
++
+[source,terminal]
+----
+$ oc login
+----
++
+Optional: Set the `KUBECONFIG` variable for the `kubeconfig` file path:
++
+[source,terminal]
+----
+$ export KUBECONFIG=~/<path-to-your-kubeconfig>
+----
+
+.Verification
+
+To ensure that you have the required variables to deploy the {med-pattern}, run the `./pattern.sh make predeploy` command. You can review your values and make any required updates.
+
+You must review the following `values*` files before deploying the {med-pattern}:
+
+|===
+| Values File | Description
+
+| values-secret.yaml
+| Values file that includes the secret parameters required by the pattern
+
+| values-global.yaml
+| File that contains all the global values used by Helm to deploy the pattern
+|===
+
+[NOTE]
+====
+Before you run the `./pattern.sh make install` command, ensure that you have the correct values for:
+```
+- domain
+- clusterName
+- cloudProvider
+- storageClassName
+- region
+- bucketSource
+```
+====
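+
+One quick way to check those values in your working copy (a sketch):
+
+[source,terminal]
+----
+$ grep -iE 'domain|clustername|cloudprovider|storageclassname|region|bucketsource' values-global.yaml
+----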
+
+//image::/videos/predeploy.svg[link="/videos/predeploy.svg"]
\ No newline at end of file
diff --git a/modules/med-setup-aws-s3-bucket-with-utilities.adoc b/modules/med-setup-aws-s3-bucket-with-utilities.adoc
new file mode 100644
index 000000000..4291111a6
--- /dev/null
+++ b/modules/med-setup-aws-s3-bucket-with-utilities.adoc
@@ -0,0 +1,32 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="setting-up-s3-bucket-for-xray-images"]
+= Using {solution-name-upstream} utilities to set up an AWS S3 bucket
+
+To use the link:https://github.com/validatedpatterns/utilities/tree/main/aws-tools[aws-tools], complete the following steps:
+
+.Procedure
+
+. Export the following environment variables for AWS. Ensure that you replace the values with your keys:
++
+[source,terminal]
+----
+export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX
+export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+----
+
+. Create the S3 bucket and copy over the data from the {solution-name-upstream} public bucket to the created bucket for your demo. You can do this in the cloud provider's console, or you can use the scripts provided in the link:https://github.com/validatedpatterns/utilities[utilities] repository:
++
+[source,terminal]
+----
+$ python s3-create.py -b mytest-bucket -r us-west-2 -p
+$ python s3-sync-buckets.py -s validated-patterns-md-xray -t mytest-bucket -r us-west-2
+----
++
+.Example output
+
+image:/videos/bucket-setup.svg[Bucket setup]
+
+Make a note of the name and the URL for the bucket for further pattern configuration. For example, you must update these values in a `values-global.yaml` file, where there is a section for `s3:`.
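+
+For example, the `s3:` section of `values-global.yaml` would then look something like this (a sketch that uses the example bucket name from the previous step):
+
+[source,yaml]
+----
+  s3:
+    bucketSource: mytest-bucket
+    bucketBaseName: "xray-source"
+----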
+
diff --git a/content/patterns/medical-diagnosis/troubleshooting.adoc b/modules/med-troubleshooting-deployment.adoc
similarity index 67%
rename from content/patterns/medical-diagnosis/troubleshooting.adoc
rename to modules/med-troubleshooting-deployment.adoc
index a7b59e0c4..a0a058df6 100644
--- a/content/patterns/medical-diagnosis/troubleshooting.adoc
+++ b/modules/med-troubleshooting-deployment.adoc
@@ -1,44 +1,8 @@
----
-title: Troubleshooting
-weight: 40
-aliases: /medical-diagnosis/troubleshooting/
----
-
-:toc:
-:imagesdir: /images
:_content-type: REFERENCE
-include::modules/comm-attributes.adoc[]
-
-[id="med-understanding-the-makefile-troubleshooting"]
-=== Understanding the Makefile
-
-The Makefile is the entrypoint for the pattern. We use the Makefile to bootstrap the pattern to the cluster. After the initial bootstrapping of the pattern, the Makefile isn't required for ongoing operations but can often be useful when needing to make a change to a config within the pattern by running a `make upgrade` which allows us to refresh the bootstrap resources without having to tear down the pattern or cluster.
-
-[id="about-make-install-make-deploy-troubleshooting"]
-==== About the make install and make deploy commands
-
-Running `make install` within the pattern application triggers a `make deploy` from `/common` directory. This initializes the `common` components of the pattern framework and install a helm chart in the `default` namespace. At this point, cluster services, such as {rh-rhacm-first} and {rh-gitops} are deployed.
-
-After components from the `common` directory are installed, the remaining tasks within the `make install` target run.
-//AI: Check which are these other tasks
-
-[id="make-vault-init-make-load-secrets-troubleshooting"]
-==== About the make vault-init and make load-secrets commands
-
-The {med-pattern} is integrated with {hashicorp-vault} and {eso-op} services for secrets management within the cluster. These targets install vault from a {helm-chart} and load the secret `(values-secret.yaml)` that you created during link:../getting-started/#preparing-for-deployment[Getting Started].
-
-If `values-secret.yaml` does not exist, make will exit with an error saying so. Furthermore, if the `values-secret.yaml` file does exist but is improperly formatted, {rh-ansible} exits with an error about being improperly formatted. To verify the format of the secret, see link:../getting-started/#preparing-for-deployment[Getting Started].
-
-[id="make-bootstrap-make-upgrade-troubleshooting"]
-==== About the make bootstrap and make upgrade commands
-The `make bootstrap` command is the target used for deploying the application specific components of the pattern. It is the final step in the initial `make install` target. You might want to consider running the `make upgrade` command instead of the `make bootstrap` command directly.
-
-Generally, running the `make upgrade` command is required when you encounter errors with the application pattern deployment. For instance, if a value was missed and the chart was not rendered correctly, executing `make upgrade` command after fixing the value would be necessary.
-
-You might want to review the `Makefile` for the `common` and `Medical Diagnosis` components, which are located in `common/Makefile` and `./Makefile` respectively.
+:imagesdir: ../../../images
[id="troubleshooting-the-pattern-deployment-troubleshooting"]
-=== Troubleshooting the Pattern Deployment
+= Troubleshooting the pattern deployment
Occasionally the pattern will encounter issues during the deployment. This can happen for any number of reasons, but most often it is because of either a change within the operator itself or something has changed in the {olm-first} which determines which operators are available in the operator catalog. Generally, when an issue occurs with the {olm-short}, the operator is unavailable for installation. To ensure that the operator is in the catalog, run the following command:
@@ -59,7 +23,7 @@ Use the grafana dashboard to assist with debugging and identifying the issue
'''
Problem:: No information is being processed in the dashboard
-Solution:: Most often this is due to the image-generator deploymentConfig needing to be scaled up. The image-generator by design is *scaled to 0*;
+Solution:: Most often this is due to the image-generator deploymentConfig needing to be scaled up. The image-generator by design is *scaled to 0*:
+
[source,terminal]
----
@@ -133,7 +97,7 @@ MariaDB [xraylabdb]> show tables;
3 rows in set (0.000 sec)
----
+
-. Verify the password set in the `values-secret.yaml` is working
+. Verify that the password set in the `values-secret.yaml` file is working:
+
[source,terminal]
----
@@ -197,4 +161,4 @@ strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134
xray-images xray-cluster 1 1 True
----
-'''
+'''
\ No newline at end of file
diff --git a/modules/med-using-ocp-gitops-to-check-app-progress.adoc b/modules/med-using-ocp-gitops-to-check-app-progress.adoc
new file mode 100644
index 000000000..4d2ad5042
--- /dev/null
+++ b/modules/med-using-ocp-gitops-to-check-app-progress.adoc
@@ -0,0 +1,44 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="using-openshift-gitops-to-check-application-progress"]
+= Using {rh-gitops-short} to check application progress
+
+To check the progress of the various applications that are being deployed, you can view the ArgoCD instances that are created by the {rh-gitops-short} Operator.
+
+. Obtain the ArgoCD URLs and passwords.
++
+The URLs and login credentials for ArgoCD change depending on the pattern name and the site names they control. Use the following instructions to find them, regardless of how you chose to deploy the pattern.
++
+Display the fully qualified domain names and matching login credentials for all ArgoCD instances:
++
+[source,terminal]
+----
+ARGO_CMD=`oc get secrets -A -o jsonpath='{range .items[*]}{"oc get -n "}{.metadata.namespace}{" routes; oc -n "}{.metadata.namespace}{" extract secrets/"}{.metadata.name}{" --to=-\\n"}{end}' | grep gitops-cluster`
+CMD=`echo $ARGO_CMD | sed 's|- oc|-;oc|g'`
+eval $CMD
+----
++
+.Example output
++
+[source,text]
+----
+NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
+hub-gitops-server hub-gitops-server-medical-diagnosis-hub.apps.wh-medctr.blueprints.rhecoeng.com hub-gitops-server https passthrough/Redirect None
+# admin.password
+xsyYU6eSWtwniEk1X3jL0c2TGfQgVpDH
+NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
+cluster cluster-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com cluster 8080 reencrypt/Allow None
+kam kam-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com kam 8443 passthrough/None None
+openshift-gitops-server openshift-gitops-server-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com openshift-gitops-server https passthrough/Redirect None
+# admin.password
+FdGgWHsBYkeqOczE3PuRpU1jLn7C2fD6
+----
++
+[IMPORTANT]
+====
+Examine the `medical-diagnosis-hub` ArgoCD instance. You can track all the applications for the pattern in this instance.
+====
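++
+If you only need the default instance, a simpler approach (a sketch that assumes the default `openshift-gitops` namespace and its `openshift-gitops-cluster` secret) is:
++
+[source,terminal]
+----
+$ oc get routes -n openshift-gitops
+$ oc extract -n openshift-gitops secrets/openshift-gitops-cluster --to=-
+----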
+
+. Check that all applications are synchronized. There are thirteen different ArgoCD `applications` that are deployed as part of this pattern.
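++
+One way to check synchronization status from the command line (a suggested query, not part of the pattern tooling) is to list the Argo CD `Application` resources across all namespaces:
++
+[source,terminal]
+----
+$ oc get applications.argoproj.io -A
+----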
diff --git a/modules/med-viewing-grafana-dashboard.adoc b/modules/med-viewing-grafana-dashboard.adoc
new file mode 100644
index 000000000..baf9657bc
--- /dev/null
+++ b/modules/med-viewing-grafana-dashboard.adoc
@@ -0,0 +1,77 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="viewing-the-grafana-based-dashboard-getting-started"]
+= Viewing the Grafana based dashboard
+
+. Accept the SSL certificates on the browser for the dashboard. In the {ocp} web console, go to the Routes for the `openshift-storage` project. Click the URL for the `s3-rgw` route.
++
+image::medical-edge/storage-route.png[link="/images/medical-edge/storage-route.png"]
++
+Ensure that you see some XML and not the access denied error message.
++
+image::medical-edge/storage-rgw-route.png[link="/images/medical-edge/storage-rgw-route.png"]
+
+. While still looking at Routes, change the project to `xraylab-1`. Click the URL for the `image-server` route. Ensure that you do not see an access denied error message. You must see a `Hello World` message.
++
+image::medical-edge/grafana-routes.png[link="/images/medical-edge/grafana-routes.png"]
+
+. Turn on the image file flow. There are three methods to do this.
++
+--
+* Method 1: Go to the command line and log in to the cluster. Ensure that you have exported the `KUBECONFIG` environment variable.
++
+[source,terminal]
+----
+$ oc scale deploymentconfig/image-generator --replicas=1 -n xraylab-1
+----
++
+* Method 2: Go to the {ocp} web console, change the view from the *Administrator* perspective to the *Developer* perspective, and select *Topology*. From there, select the `xraylab-1` project.
++
+image::medical-edge/dev-topology.png[link="/images/medical-edge/dev-topology.png"]
++
+Right-click on the `image-generator` pod icon and select `Edit Pod count`.
++
+image::medical-edge/dev-topology-menu.png[link="/images/medical-edge/dev-topology-menu.png"]
++
+Increase the pod count from `0` to `1` and save.
++
+image::medical-edge/dev-topology-pod-count.png[link="/images/medical-edge/dev-topology-pod-count.png"]
++
+* Method 3: Go to the {ocp} web console and change to the *Administrator* perspective.
++
+Under *Workloads*, select *DeploymentConfigs* for *Project:xraylab-1*.
+Click `image-generator` and increase the pod count to 1.
++
+image::medical-edge/start-image-flow.png[link="/images/medical-edge/start-image-flow.png"]
+--
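++
+Whichever method you use, you can optionally verify that the deployment scaled up:
++
+[source,terminal]
+----
+$ oc get deploymentconfig/image-generator -n xraylab-1
+----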
+
+[id="customizing-dashboard"]
+== Customizing the dashboard
+
+You can change some of the parameters and watch how the changes affect the dashboard.
+
+. To increase or decrease the number of image generators, run the following command:
++
+[source,terminal]
+----
+$ oc scale deploymentconfig/image-generator --replicas=2 -n xraylab-1
+----
++
+Check the dashboard.
++
+[source,terminal]
+----
+$ oc scale deploymentconfig/image-generator --replicas=0 -n xraylab-1
+----
++
+Watch the dashboard stop processing images.
+
+. You can also simulate a change of the AI model version, which is stored as an environment variable in the Serverless Service configuration.
++
+[source,terminal]
+----
+$ oc patch service.serving.knative.dev/risk-assessment --type=json -p '[{"op":"replace","path":"/spec/template/metadata/annotations/revisionTimestamp","value":"'"$(date +%F_%T)"'"},{"op":"replace","path":"/spec/template/spec/containers/0/env/0/value","value":"v2"}]'
+----
++
+This changes the model version value and the `revisionTimestamp` annotation, which triggers a redeployment of the service.
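++
+To confirm that a new revision was rolled out after the patch, you can, for example, list the Knative revisions:
++
+[source,terminal]
+----
+$ oc get revisions.serving.knative.dev -n xraylab-1
+----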