- Python version >= 3.7
- Dependency packages needed on Fedora/CentOS to successfully install the python modules in the virtualenv:
  gcc, git, openssl-devel, python3-devel (or the equivalent packages on Ubuntu).
- Configure AWS account credentials when testing with AWS platforms:
  check the default section in `~/.aws/credentials` for the access/secret key (see aws-configuration).
- `oc` client binary is installed on your localhost and listed in `$PATH`
  (running `oc version` on the terminal should display a version > 3.11; see the quick check after this list).
  The latest client can be downloaded from oc-client.
- For vSphere based installations, `terraform` and `jq` should be installed
  (terraform version should be 0.11.13).
- Installation of `ovirt-engine-sdk-python` requires `curl-config` and
  `libxml/xmlreader.h` (on Fedora, RHEL or CentOS these are provided by the
  `libcurl-devel` and `libxml2-devel` packages respectively).
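A quick, hedged way to verify the binaries above are present and on `$PATH`:

```
# Verify the prerequisite binaries are installed and on $PATH.
oc version            # should display a version > 3.11
terraform version     # vSphere installations only; should be 0.11.13
jq --version
python3 --version     # should be 3.7 or newer
```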
There are additional prerequisites if you plan to execute AWS UPI deployments:

- Install the `jq` and `awscli` system packages.
Along with the AWS UPI prerequisites we need the following (see the sketch after this list):

- `openshift-dev.pem` needs to be available to ocs-ci.
- Provide `ops-mirror.pem` in the `data/` directory (ops-mirror).
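A sketch of the package installation and certificate placement described above (Fedora/CentOS package manager assumed; the source paths for the `.pem` files are placeholders):

```
# Install the AWS UPI system packages (use apt-get on Ubuntu).
sudo dnf install jq awscli

# Make the certificates available to ocs-ci (source paths are placeholders).
cp /path/to/openshift-dev.pem ~/.ssh/openshift-dev.pem
cp /path/to/ops-mirror.pem data/ops-mirror.pem
```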
Since vSphere IPI deployments require access to vCenter, we must add vCenter's trusted root CA certificates to the system trust before installing an OCP cluster.
Follow this procedure to add vCenter's trusted root CA certificates.
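On Fedora/RHEL, the procedure is roughly the following; a sketch, assuming your vCenter exposes its CA bundle at the standard `certs/download.zip` location (the hostname is a placeholder):

```
# Download vCenter's root CA bundle (hostname is a placeholder).
curl -kO https://<vcenter.example.com>/certs/download.zip
unzip download.zip

# Add the Linux certificates to the system trust and refresh it.
sudo cp certs/lin/* /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
```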
The system `sed` package is not compatible with the script used to install AWS
UPI. To resolve this issue, you must install `gnu-sed`. You can do this with brew:

```
brew install gnu-sed
```

In addition, you will need to ensure that `gnu-sed` is used instead of the
system `sed`. To do this you will need to update your `PATH` accordingly. In
your shell rc file (`~/.bashrc`, `~/.zshrc`, etc.) add the following line to
the end of the file:

```
export PATH="/usr/local/opt/gnu-sed/libexec/gnubin:$PATH"
```
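After reloading your shell, you can confirm the GNU variant is the one being picked up:

```
# GNU sed identifies itself; the BSD sed shipped with macOS does not.
sed --version | head -n 1   # should print "sed (GNU sed) ..."
```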
It is recommended that you use a python virtual environment to install the
necessary dependencies:

- Clone the ocs-ci repository from https://github.com/red-hat-storage/ocs-ci
  via the command:

  ```
  git clone [email protected]:red-hat-storage/ocs-ci.git
  ```

- Go to the ocs-ci folder: `cd ocs-ci`.
- Set up a python 3.7 virtual environment. This is actually quite easy to do
  now. Use a hidden `.venv` or normal `venv` folder for the virtual env, as we
  are ignoring these in the flake8 configuration in tox:

  ```
  python3.7 -m venv <path/to/venv>
  source <path/to/venv>/bin/activate
  ```

- Upgrade pip and setuptools with `pip install --upgrade pip setuptools`.
- Install requirements with `pip install -r requirements.txt`.
- Install pre-commit hooks to enforce commit sign-offs, flake8 compliance and
  more:

  ```
  pip install -r requirements-dev.txt
  pre-commit install --hook-type pre-commit --hook-type commit-msg
  ```
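To confirm the hooks were installed correctly, you can run them once against the whole tree:

```
# Run all configured pre-commit hooks once to verify the setup.
pre-commit run --all-files
```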
Configure your ocs-ci.yaml and pass it with the `--ocsci-conf` parameter.

This file is used to allow configuration around a number of things within
ocs-ci. The default file is in `ocs_ci/framework/conf/default_config.yaml`.

The required keys are in the template. Values are placeholders and should be
replaced by legitimate values. Values for report portal or polarion are only
required if you plan on posting to that particular service.

Move a copy of the template to your conf directory, edit it from there with
the proper values, and pass it with the `--ocsci-conf` parameter to pytest.
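As an illustration, assuming you copied the template to `conf/my-config.yaml`, a run could look like this (`run-ci` is the project's pytest wrapper; cluster name and path are placeholders):

```
run-ci -m deployment --ocsci-conf conf/my-config.yaml \
    --cluster-name my-cluster --cluster-path /tmp/my-cluster tests/
```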
The OCS performance tests need an elastic-search server for running benchmarks
and storing the results. If an elastic-search server is not available, the
tests can deploy ES in the system under test for the benchmark, and will dump
all results to a JSON file.

The support for automated deployment of the elastic-search server is available
only for the x86_64 architecture. For other architectures (e.g. PPC / s390),
since the benchmark can not deploy an ES server on the OCP cluster, an ES
server needs to be deployed in the lab, and its IP/port need to be configured
in the configuration file.

All elastic-search configuration is done in
`ocs_ci/framework/conf/default_config.yaml` in the `PERF` section.
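For a lab-deployed ES server, the relevant override in a custom config might look like this (the key names are assumptions; check the `PERF` section of `default_config.yaml` for the authoritative names):

```
PERF:
  # Assumed key names -- verify against default_config.yaml.
  production_es: true                # use an external (lab) ES server
  production_es_server: "10.0.0.1"   # placeholder IP of the lab ES server
  production_es_port: 9200
```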
In order to deploy a cluster to AWS with the Openshift Installer, you will
need to download the pull secret for your account. Download this file from
openshift.com and place it in the `data` directory at the root level of the
project. If there is no `data` directory, create one. The name of the file
should be `pull-secret`.
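For example, from the root of the ocs-ci checkout (the download location is a placeholder):

```
mkdir -p data
cp ~/Downloads/pull-secret data/pull-secret
```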
If you intend to test non-GA versions of the local storage operator you will
need to provide authentication for `brew.registry.redhat.io` in your pull
secret. In order to obtain credentials for this brew registry you will need to
do the following:

- Send a request to [email protected] for `brew.registry.redhat.io`
  credentials for you or your team. You will want to provide an email address
  to associate the account with. You may be required to provide a gpg public
  key in order to receive the credentials.
- Once you have received credentials for the brew registry you will need to
  log in to the registry in order to obtain the data you will add to your pull
  secret. You can log in to the registry using `docker` and the provided
  credentials:

  ```
  docker login brew.registry.redhat.io
  ```

- Once you have successfully logged in, you can retrieve the auth data from
  `~/.docker/config.json`. Grab the auth section for `brew.registry.redhat.io`;
  it will look something like this:

  ```
  "brew.registry.redhat.io" : { "auth" : "TOKEN" },
  ```

- Add that auth section to your existing pull secret.
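One way to splice the new auth into the pull secret is with `jq`; a sketch, assuming the `docker login` above stored the auth inline in `~/.docker/config.json` (credential helpers may store it elsewhere) and your pull secret lives in `data/pull-secret`:

```
# Merge the auths from the docker config into the pull secret's auths map.
# Note: this copies every auth present in the docker config.
jq -s '.[0].auths = (.[0].auths + .[1].auths) | .[0]' \
    data/pull-secret ~/.docker/config.json > data/pull-secret.new
mv data/pull-secret.new data/pull-secret
```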
In addition, you will need to add a registry auth to your pull-secret to
support deploying CI / Nightly builds. Please follow the instructions here to
do so.

To support pulling images from the new private repositories in quay, you will
need to add yet another registry auth to the `auths` section of your
pull-secret. Ask people on the ocs-qe mailing list or chat room if you don't
know where to find the TOKEN.

```
{"quay.io/rhceph-dev": { "auth": "TOKEN"}}
```
We would like to use an ssh key shared with engineering, which allows both QE
and engineering to connect to the nodes via a known ssh key. To set up the
shared public ssh key for your deployment, follow these steps:

Download the private openshift-dev ssh key from the secret location to
`~/.ssh/openshift-dev.pem`.

```
chmod 600 ~/.ssh/openshift-dev.pem
ssh-keygen -y -f ~/.ssh/openshift-dev.pem > ~/.ssh/openshift-dev.pub
```

Ask people on the ocs-qe mailing list or chat room if you don't know where to
find the secret URL for the openshift-dev key. Or look for the mail thread
"Libra ssh key replaced by openshift-dev key" where the URL was mentioned.

If you would like to use a different path, you can overwrite it in the custom
config file under the DEPLOYMENT section with this key and value:
`ssh_key: "~/your/custom/path/ssh-key.pub"`.

If you don't want to use the shared key, you can change this value to
`~/.ssh/id_rsa.pub` to use your own public key. If the public key does not
exist, the deployment of this public key is skipped.

How to connect to a node via SSH is described here.
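With the key in place, connecting typically looks like this (the `core` user is the default on Red Hat CoreOS nodes; the node address is a placeholder):

```
ssh -i ~/.ssh/openshift-dev.pem core@<node-address>
```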
For some services we will require additional information in order to successfully authenticate. This is a simple yaml file that you will need to create manually.
Create a file under `ocs-ci/data/` named `auth.yaml`.
To authenticate with quay you will need to have an access token. You can generate one yourself by following the API doc or you may use the one QE has generated already. Ask people on ocs-qe mailing list or chat room if you don't know where to find the access token.
To enable ocs-ci to use this token, add the following to your `auth.yaml`:

```
quay:
  access_token: 'YOUR_TOKEN'
```
For disconnected cluster installation, we need to access the github API
(during download of the opm tool), which has a very strict rate limit for
unauthenticated requests (60 requests per hour). To avoid API rate limit
exceeded errors, you can provide github authentication credentials (username
and token) obtained on the Personal access tokens page (Settings -> Developer
settings -> Personal access tokens):

```
github:
  username: "GITHUB_USERNAME"
  token: "GITHUB_TOKEN"
```
AWS and CentralCI authentication files will reside in the user's home
directory and will be used by a CLI option.

Cluster configuration that defines the Openshift/Kubernetes cluster along with
the Ceph configuration will reside in the `conf/` folder. This is still a work
in progress.
To send test run reports to email IDs, postfix should be installed on fedora:
* `sudo dnf install postfix`
* `systemctl enable postfix.service`
* `systemctl start postfix.service`
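A quick, hedged way to check that postfix came up and accepts mail locally (the `mail` command is provided by the `mailx`/`s-nail` package; the address is a placeholder):

```
systemctl status postfix.service
echo "ocs-ci mail test" | mail -s "test report" [email protected]
```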