The Edge to Core Data Pipelines for AI/ML solution pattern provides an architecture solution for scenarios in which edge devices generate image data, which must be collected, processed, and stored at the edge before being utilized to train AI/ML models at the core data center or cloud.
This solution pattern contains resources that showcase a complete, continuous data cycle: capturing training data, training new ML models, deploying and serving them, and exposing the service for clients to send inference requests.
Important
The solution in this repository utilizes integrations based on Apache Camel K. While Camel K is still active within upstream Apache Camel, Red Hat has shifted its support to a cloud-native approach, focusing on Camel JBang and Kaoto as primary development tools.
As a result, all Camel K instances in this Solution Pattern will transition to the Red Hat build of Apache Camel, aligning with Red Hat's new strategic direction.
Head to the Solution Pattern's home page to get the full context of this demo's sources. You can find it by following the link below:

Tested with:

- RH OpenShift 4.12.12
- RHODF 4.12.14 provided by Red Hat
- RHOAI 2.8.4 provided by Red Hat
- RHO Pipelines 1.10.4 provided by Red Hat
- AMQ-Streams 2.8.0-0 provided by Red Hat
- AMQ Broker 7.11.7 provided by Red Hat
- Red Hat build of Apache Camel 4
- Camel K 1.10.8 provided by Red Hat
- RH Service Interconnect 1.4.4-rh-1 provided by Red Hat
Provision the following RHDP item:
Alternatively, if you don't have access to RHDP, ensure you have an OpenShift environment available and install Red Hat OpenShift AI, meeting the prerequisite product versions (see the 'Tested with' section above).
The instructions below assume:

- You have Docker, Podman, or `ansible-playbook` installed on your local environment.
- You have provisioned an OCP instance (tested with OCP 4.12 + RHOAI 2.8) using RHDP, and a bastion server is available.
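As a quick sanity check before starting, the snippet below (a sketch, not part of the installer) reports which of the supported runners is available on your machine:

```shell
# Sketch: detect which of the supported runners (Docker, Podman, or a local
# ansible-playbook) is installed; the installer needs only one of them.
found=""
for tool in docker podman ansible-playbook; do
  command -v "$tool" >/dev/null 2>&1 && found="$found $tool"
done
if [ -n "$found" ]; then
  echo "available runners:$found"
else
  echo "no supported runner found: install Docker, Podman, or ansible-playbook"
fi
```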
- Clone this GitHub repository:

  ```shell
  git clone https://github.com/brunoNetId/sp-edge-to-cloud-data-pipelines-demo.git
  ```

- Change to the root directory of the project:

  ```shell
  cd sp-edge-to-cloud-data-pipelines-demo
  ```
When running with Docker or Podman:

- Configure the `KUBECONFIG` file to use (where kube details are set after login):

  ```shell
  export KUBECONFIG=./ansible/kube-demo
  ```

- Log in to your OpenShift cluster from the `oc` command line:

  ```shell
  oc login --username="admin" --server=https://(...):6443 --insecure-skip-tls-verify=true
  ```

  Replace the `--server` URL with your own cluster API endpoint.

- Run the playbook.

  With Docker:

  ```shell
  docker run -i -t --rm --entrypoint /usr/local/bin/ansible-playbook \
    -v $PWD:/runner \
    -v $PWD/ansible/kube-demo:/home/runner/.kube/config \
    quay.io/agnosticd/ee-multicloud:v0.0.11 \
    ./ansible/install.yaml
  ```

  With Podman:

  ```shell
  podman run -i -t --rm --entrypoint /usr/local/bin/ansible-playbook \
    -v $PWD:/runner \
    -v $PWD/ansible/kube-demo:/home/runner/.kube/config \
    quay.io/agnosticd/ee-multicloud:v0.0.11 \
    ./ansible/install.yaml
  ```
When running with Ansible Playbook (installed on your machine):

- Log in to your OpenShift cluster from the `oc` command line, for example:

  ```shell
  oc login --username="admin" --server=https://(...):6443 --insecure-skip-tls-verify=true
  ```

  (Replace the `--server` URL with your own cluster API endpoint.)

- Set the following property:

  ```shell
  TARGET_HOST="[email protected]"
  ```

- Run the Ansible playbook:

  ```shell
  ansible-playbook -i $TARGET_HOST,ansible/inventory/openshift.yaml ./ansible/install.yaml
  ```
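The last command interpolates `TARGET_HOST` into the single inventory argument passed to Ansible; a minimal sketch of the expansion, using a hypothetical bastion host (not a real one):

```shell
# Sketch only: shows how TARGET_HOST expands inside the playbook invocation.
# "lab-user@bastion.example.com" is a hypothetical placeholder host.
TARGET_HOST="lab-user@bastion.example.com"
echo ansible-playbook -i "${TARGET_HOST},ansible/inventory/openshift.yaml" ./ansible/install.yaml
```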
The default installation deploys the following zones:

- `edge1`: represents the Edge environment where live inferencing occurs.
- `central`: represents the Core data centre where models are trained.
The Solution Pattern's architecture allows for more Edge environments to be connected to the main data centre, as per the illustration below:
To deploy new Edge environments, use the same commands as above, but add the following environment parameter:

```shell
-e EDGE_NAME=[your-edge-name]
```

For example, using the following parameter definition:

```shell
... ./ansible/install.yaml -e EDGE_NAME=zone2
```

will create a new namespace `edge-zone2` where all the Edge applications and integrations will be deployed.
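The namespace name follows a simple convention; a sketch (assuming the `edge-` prefix shown in the example above):

```shell
# Sketch: the installer derives the target namespace from EDGE_NAME as
# "edge-<name>", matching the edge-zone2 example above.
EDGE_NAME="zone2"
NAMESPACE="edge-${EDGE_NAME}"
echo "$NAMESPACE"
```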
If you wish to undeploy the demo, use the same commands as above, but with `./uninstall.yaml` instead of `./install.yaml`.