Merge branch 'master' into hosted_agent_infra
jilju authored Jun 14, 2024
2 parents f3eaaaa + eb9a669 commit 154dcb6
Showing 72 changed files with 3,682 additions and 346 deletions.
22 changes: 22 additions & 0 deletions .dockerignore
@@ -2,3 +2,25 @@
.venv/
venv/
data/
*~
.*.sw[nmop]
*.pyc
.tox
__pycache__
rerun
ven*
*egg*
.idea
*.iml
config.yaml
.vscode
*.bak
openshift-install*
bin/
dist
.coverage
data/*
external/
apidoc/*
_build/*
bugzilla.cfg
9 changes: 4 additions & 5 deletions Docker_files/ocsci_container/Containerfile.ci
@@ -1,9 +1,9 @@
-FROM registry.access.redhat.com/ubi8/ubi:latest as BUILDER
+FROM registry.access.redhat.com/ubi9/ubi:latest as BUILDER

ENV OCS_CI_DIR=/opt/ocs-ci
WORKDIR "${OCS_CI_DIR}"

-RUN dnf install --nodocs -y python38 python38-devel git libcurl-devel gcc openssl-devel libxml2-devel
+RUN dnf install --nodocs -y python39 python3.9-devel git libcurl-devel gcc openssl-devel libxml2-devel

# Copy the entire source tree to the image
# TODO: Check to see if there are any more files we can drop from the image
@@ -16,13 +16,12 @@ RUN pushd "${OCS_CI_DIR}" \
&& python3 -m venv venv \
&& source venv/bin/activate \
&& pip3 install --upgrade pip \
-&& pip3 install setuptools==65.5.0 \
&& pip3 install -r requirements.txt \
&& rm -rf .git

### Runner stage

-FROM registry.access.redhat.com/ubi8/ubi:latest as RUNNER
+FROM registry.access.redhat.com/ubi9/ubi:latest as RUNNER

ENV OCS_CI_DIR="/opt/ocs-ci" \
VIRTUAL_ENV="/opt/ocs-ci/venv" \
@@ -37,7 +36,7 @@ RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/s

RUN curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz | tar -C /usr/local/bin -zxvf - oc

-RUN dnf install -y --nodocs python39 git jq rsync make \
+RUN dnf install -y --nodocs python39 git jq rsync make \
&& dnf clean all \
&& rm -rf /var/cache/yum /var/cache/dnf /var/lib/dnf/repos /var/log/dnf.librepo.log /var/log/dnf.log /var/log/dnf.rpm.log /var/log/hawkey.log /var/cache/ldconfig \
&& curl -sL https://github.com/mikefarah/yq/releases/download/v4.32.2/yq_linux_amd64.tar.gz | tar -C /usr/local/bin -zxvf - ./yq_linux_amd64 \
7 changes: 7 additions & 0 deletions conf/README.md
@@ -301,6 +301,10 @@ higher priority).
* `private_gw` - GW for the private interface
* `root_disk_id` - ID of the root disk
* `root_disk_sn` - Serial number of the root disk
* `node_network_configuration_policy_name` - Name of the NodeNetworkConfigurationPolicy CR
* `node_network_configuration_policy_ip` - IP address of the NodeNetworkConfigurationPolicy CR
* `node_network_configuration_policy_prefix_length` - Subnet prefix length of the NodeNetworkConfigurationPolicy CR
* `node_network_configuration_policy_destination_route` - Destination route of the NodeNetworkConfigurationPolicy CR
* `hcp_version` - version of HCP client to be deployed on machine running the tests
* `metallb_version` - MetalLB operator version to install
* `install_hypershift_upstream` - Install hypershift from upstream or not (Default: false). Necessary for unreleased OCP/CNV versions
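For illustration, the `node_network_configuration_policy_*` settings above together describe one NMState policy. The sketch below is a hypothetical helper (not part of ocs-ci) that renders those values into a `NodeNetworkConfigurationPolicy` manifest dict; the interface name `eth1` and the overall key layout are assumptions based on the upstream NMState schema:

```python
def build_nncp_manifest(name, ip, prefix_length, destination_route):
    """Hypothetical sketch: map the ENV_DATA settings onto an NMState
    NodeNetworkConfigurationPolicy manifest (layout assumed)."""
    return {
        "apiVersion": "nmstate.io/v1",
        "kind": "NodeNetworkConfigurationPolicy",
        "metadata": {"name": name},
        "spec": {
            "desiredState": {
                "interfaces": [
                    {
                        "name": "eth1",  # assumed interface name
                        "type": "ethernet",
                        "state": "up",
                        "ipv4": {
                            "enabled": True,
                            "address": [
                                {"ip": ip, "prefix-length": prefix_length}
                            ],
                        },
                    }
                ],
                "routes": {
                    "config": [
                        {
                            "destination": destination_route,
                            "next-hop-interface": "eth1",
                        }
                    ]
                },
            }
        },
    }

manifest = build_nncp_manifest("worker-net-policy", "10.0.0.5", 24, "0.0.0.0/0")
```

The resulting dict can then be serialized to YAML and applied with `oc apply` like any other CR.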
@@ -314,6 +318,9 @@ higher priority).
* `hosted_odf_registry` - registry for hosted ODF
* `hosted_odf_version` - version of ODF to be deployed on hosted clusters
* `wait_timeout_for_healthy_osd_in_minutes` - timeout waiting for healthy OSDs before continuing upgrade (see https://bugzilla.redhat.com/show_bug.cgi?id=2276694 for more details)
* `odf_provider_mode_deployment` - True to enable provider mode deployment.
* `client_subcription_image` - ODF subscription image details for the storage clients.
* `channel_to_client_subscription` - Channel value for the ODF subscription image for the storage clients.

#### UPGRADE

@@ -19,3 +19,7 @@ ENV_DATA:
encryption_at_rest: true
sc_encryption: true
KMS_PROVIDER: azure-kv
REPORTING:
polarion:
deployment_id: 'OCS-5798'

2 changes: 1 addition & 1 deletion conf/deployment/vsphere/ai_1az_rhcos_vsan_3m_3w.yaml
@@ -11,5 +11,5 @@ ENV_DATA:
worker_num_cpus: '16'
master_memory: '16384'
compute_memory: '65536'
-extra_disks: 2
+extra_disks: 4
fio_storageutilization_min_mbps: 10.0
6 changes: 6 additions & 0 deletions docs/usage.md
@@ -157,6 +157,12 @@ to the pytest.
* `--install-lvmo` - Deploy LVMCluster, will skip ODF deployment.
* `--lvmo-disks` - Number of disks to add to SNO deployment.
* `--lvmo-disks-size` - Size of disks to add to SNO deployment.
* `--disable-environment-checker` - Disable the leftover checks in the existing flow.
* `--resource-checker` - Identify leftover resources created by test cases. This is similar to
  the environment checker; the difference is that the resource checker tracks only resources
  created during the test case run, whereas the environment checker tracks all resources in
  the cluster regardless of who created them.
* `--kubeconfig` - Location of the kubeconfig file.
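The resource-checker idea described above can be sketched in a few lines: snapshot the cluster's resources before a test, snapshot again after, and report the difference. This is a hypothetical illustration of the concept, not the ocs-ci implementation:

```python
def leftover_resources(before, after):
    """Sketch of the resource-checker concept (hypothetical helper):
    anything present after the test but absent before is a leftover
    created by the test. An environment checker, by contrast, would
    inspect everything in `after` regardless of who created it."""
    return sorted(set(after) - set(before))

# Simulated snapshots taken before and after a test run
before = {"pod/rook-ceph-mon-a", "pvc/db-0"}
after = {"pod/rook-ceph-mon-a", "pvc/db-0", "pvc/test-leftover"}

print(leftover_resources(before, after))  # → ['pvc/test-leftover']
```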

## Examples

28 changes: 19 additions & 9 deletions ocs_ci/deployment/baremetal.py
@@ -1251,7 +1251,7 @@ def destroy_cluster(self, log_level="DEBUG"):


@retry(exceptions.CommandFailed, tries=10, delay=30, backoff=1)
-def clean_disk(worker):
+def clean_disk(worker, namespace=constants.DEFAULT_NAMESPACE):
"""
Perform disk cleanup
@@ -1262,41 +1262,51 @@ def clean_disk(worker):
ocp_obj = ocp.OCP()
cmd = """lsblk --all --noheadings --output "KNAME,PKNAME,TYPE,MOUNTPOINT" --json"""
out = ocp_obj.exec_oc_debug_cmd(
-node=worker.name, cmd_list=[cmd], namespace=constants.BM_DEBUG_NODE_NS
+node=worker.name, cmd_list=[cmd], namespace=namespace
)
disk_to_ignore_cleanup_raw = json.loads(str(out))
disk_to_ignore_cleanup_json = disk_to_ignore_cleanup_raw["blockdevices"]
selected_disks_to_ignore_cleanup = []
for disk_to_ignore_cleanup in disk_to_ignore_cleanup_json:
if disk_to_ignore_cleanup["mountpoint"] == "/boot":
logger.info(
f"Ignoring disk {disk_to_ignore_cleanup['pkname']} for cleanup because it's a root disk"
)
selected_disk_to_ignore_cleanup = disk_to_ignore_cleanup["pkname"]
# Adding break when root disk is found
break
selected_disks_to_ignore_cleanup.append(
str(disk_to_ignore_cleanup["pkname"])
)
elif disk_to_ignore_cleanup["type"] == "rom":
logger.info(
f"Ignoring disk {disk_to_ignore_cleanup['kname']} for cleanup because it's a rom disk"
)
selected_disks_to_ignore_cleanup.append(
str(disk_to_ignore_cleanup["kname"])
)

out = ocp_obj.exec_oc_debug_cmd(
node=worker.name,
cmd_list=["lsblk -nd -e252,7 --output NAME --json"],
-namespace=constants.BM_DEBUG_NODE_NS,
+namespace=namespace,
)
lsblk_output = json.loads(str(out))
lsblk_devices = lsblk_output["blockdevices"]

for lsblk_device in lsblk_devices:
-if lsblk_device["name"] == str(selected_disk_to_ignore_cleanup):
+if lsblk_device["name"] in selected_disks_to_ignore_cleanup:
logger.info(f'Disk cleanup is ignored for {lsblk_device["name"]}')
else:
logger.info(f"Cleaning up {lsblk_device['name']}")
out = ocp_obj.exec_oc_debug_cmd(
node=worker.name,
cmd_list=[f"wipefs -a -f /dev/{lsblk_device['name']}"],
-namespace=constants.BM_DEBUG_NODE_NS,
+namespace=namespace,
)
logger.info(out)
out = ocp_obj.exec_oc_debug_cmd(
node=worker.name,
cmd_list=[f"sgdisk --zap-all /dev/{lsblk_device['name']}"],
-namespace=constants.BM_DEBUG_NODE_NS,
+namespace=namespace,
)
logger.info(out)
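`clean_disk` is wrapped in `@retry(exceptions.CommandFailed, tries=10, delay=30, backoff=1)`. A minimal sketch of what such a decorator does is shown below; this is a hedged illustration of the semantics (the actual `ocs_ci` retry utility may differ): re-run the function up to `tries` times when the named exception is raised, sleeping `delay` seconds between attempts, with the delay multiplied by `backoff` after each failure (`backoff=1` keeps it constant).

```python
import functools
import time

def retry(exception, tries=10, delay=30, backoff=1):
    """Sketch of a retry decorator with the semantics described above
    (hypothetical; not the ocs-ci implementation)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(1, tries + 1):
                try:
                    return func(*args, **kwargs)
                except exception:
                    if attempt == tries:
                        raise  # out of attempts: propagate the error
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator

calls = []

@retry(ValueError, tries=3, delay=0, backoff=1)
def flaky():
    """Fails twice, then succeeds, to exercise the retry loop."""
    calls.append(1)
    if len(calls) < 3:
        raise ValueError("transient failure")
    return "ok"

assert flaky() == "ok"
assert len(calls) == 3  # two failed attempts plus one success
```

With the parameters used on `clean_disk` (10 tries, 30-second constant delay), a transiently failing `oc debug` command gets up to five minutes of retries before the `CommandFailed` exception is finally raised.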
