
Updates to provider-kubeconfig.py to generate consumer kubeconfig #1376

Merged
merged 190 commits into from
Nov 21, 2024

Conversation

devdattakulkarni
Contributor

No description provided.

Also, modified the provider-kubeconfig generator to prepend https
to the provided API server IP only if it is not already present.
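The scheme check described above can be sketched as follows. This is an illustrative helper, not the actual code in provider-kubeconfig.py; the function name is hypothetical.

```python
# Hedged sketch: prepend "https://" to the API server address only
# when the scheme is not already present, so the kubeconfig's
# cluster.server field is never double-prefixed.
def normalize_api_server(address):
    """Return the API server address with an https:// scheme."""
    if address.startswith("https://"):
        return address
    return "https://" + address
```

Both bare addresses (`1.2.3.4:6443`) and already-prefixed ones then yield the same normalized value.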
In the original design, we were using the output of kubectl connections
to find the Pods and then running metrics and logs on those Pods.
However, for workloads in which Pods get created at runtime, kubectl connections
is not able to find all the newly created workload Pods. This leads to
incomplete results for metrics and logs. A simple way to handle this is
to collect logs and metrics from all the Pods in the namespace in which
the application Pods are running. The runtime Pods are typically
created in the same namespace in which the application is running.
This will give accurate results (at least more accurate than using the output
of kubectl connections for the list of Pods).
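The revised selection logic above amounts to filtering by namespace instead of walking the kubectl connections graph. A minimal sketch, assuming Pods are available as (name, namespace) pairs; the helper name is hypothetical:

```python
# Hedged sketch of the namespace-based Pod selection: take every Pod
# in the application's namespace as the target set for logs and
# metrics, so Pods created at runtime are included too.
def pods_for_metrics(all_pods, app_namespace):
    """Return names of all Pods in the application's namespace."""
    return [name for name, ns in all_pods if ns == app_namespace]
```

Runtime-created Pods (e.g. Jobs spawned by the application) land in the same namespace and are therefore picked up automatically.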

Fixes: #1190
Added a field ('error') to the status object.
This is used by helmer to store any errors that
are encountered when performing helm upgrade.
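The 'error' field described above could be populated along these lines. Only the field name comes from the commit message; the surrounding structure and helper name are assumptions for illustration:

```python
# Hedged sketch: record a failed helm upgrade on the CR's status
# object under the 'error' key, leaving the caller's copy untouched.
def record_upgrade_error(status, message):
    """Return a copy of the status object with the error recorded."""
    status = dict(status)  # avoid mutating the caller's object
    status["error"] = message
    return status
```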
It provides a UI for managing application instances.
But it does not run on the cluster.
The open-source equivalent is consumerUI, which is
deployed as part of the KubePlus deployment.
- Added link to the Contributing guidelines. These guidelines
  contain pointers to setting up the development environment
- Extracted Architecture into a separate section
- Updated link to the CNCF Application definition section
- Added Getting Started section

Fixes: #1221
Licensing support
Correctly updating the status field of the CR instance upon app update
Moved the KubePlus CRDs (ResourceComposition, etc.) into the crds folder inside the chart. This ensures
that the CRDs don't get deleted when KubePlus is upgraded, which in turn ensures that
the application CRDs and application instances are not deleted when KubePlus is upgraded.
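This relies on Helm 3's special handling of a top-level crds/ directory: its contents are installed on first install but skipped on upgrade and delete. The layout below is illustrative; actual file names in the KubePlus chart may differ.

```
kubeplus-chart/
├── Chart.yaml
├── crds/          # installed once; Helm never upgrades or deletes these
│   └── resourcecomposition-crd.yaml
└── templates/     # regular templates, reconciled on every upgrade
    └── deployment.yaml
```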

Fixes: #1338
- GH action runner on Kind seems to be having some issues. KubePlus
  Pod is not starting up.
- Hence, re-added GH action on Minikube
In local testing on Minikube, one of the application Pods
created as part of this test gets stuck in the Pending state
because it needs more memory. It is possible that
this is happening in the GH Action CI run as well, so we are skipping
this test for now. We can turn it back on once we know exactly
how much memory to give the local cluster for this
test to pass.

Also, in the spot check test with WordPress, okaying the test
even if application Pods are in the "Pending" state. The purpose of the spot
check is to quickly verify the core functionality. For this, it is
okay if application Pods are in the Pending state (and don't progress to
the Running state). All we are interested in is the Pods getting created,
not whether they transition to the Running state.
- Deprecating Pod-level policy enforcement
  Fixes: #1367
  Fixes: #1366

- Pinning TLS version to 1.2
  Fixes: #1365
- Enabling the Minikube run on pull_request
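The TLS pinning mentioned above can be sketched with Python's ssl module. This is a hedged illustration; the PR does not show where or in what language the version is actually pinned.

```python
# Hedged sketch: build a server-side SSL context that accepts
# TLS 1.2 only, by setting both the minimum and maximum version.
import ssl

def make_tls12_context():
    """Build an SSL context pinned to TLS 1.2."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Pinning both bounds rejects TLS 1.0/1.1 clients as well as TLS 1.3 negotiation, which matches the "pin to 1.2" wording.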
@devdattakulkarni devdattakulkarni merged commit 7b2cadb into master Nov 21, 2024
1 check passed