I have a very simple script that uses Kopf.
It has a handler that is triggered when a Workflow is created. This handler, in turn, creates a "bmc.tinkerbell.org/v1alpha1/Job" object in Kubernetes (without adopting it).
Now, I want to create a second handler in the same file that handles the update of the said Job:
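The original handler code is not included here, so the following is only a minimal sketch of the setup, reconstructed from the log output below. The handler names, resource kinds, and the `reboot-<name>-to-pxe` Job name come from the logs; the Job spec and the exact logic are assumptions.

```python
import kopf
import kubernetes

@kopf.on.create("tinkerbell.org", "v1alpha1", "workflows")
def create_fn(name, namespace, logger, **kwargs):
    # First handler: reacts to Workflow creation and creates a BMC Job.
    logger.info(f"Creating boot-to-PXE object for hardware: {name}")
    job_name = f"reboot-{name}-to-pxe"
    body = {
        "apiVersion": "bmc.tinkerbell.org/v1alpha1",
        "kind": "Job",
        "metadata": {"name": job_name, "namespace": namespace},
        "spec": {},  # actual BMC job spec omitted
    }
    kubernetes.client.CustomObjectsApi().create_namespaced_custom_object(
        group="bmc.tinkerbell.org",
        version="v1alpha1",
        namespace=namespace,
        plural="jobs",
        body=body,
    )
    return {"boot-to-pxe-name": job_name}


@kopf.on.update("bmc.tinkerbell.org", "v1alpha1", "jobs")
def on_bmc_job_update_fn(name, status, logger, **kwargs):
    # Second handler: supposed to fire when the BMC Job is updated,
    # but it is never invoked (see the logs below).
    logger.info(f"BMC Job {name} updated, status: {status}")
```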
Surprisingly, only the first handler (create_fn) is ever called. The handler on_bmc_job_update_fn is never invoked.
When looking at the kopf logs, I can see that only one handler is registered:
/usr/local/lib/python3.12/site-packages/kopf/_core/reactor/running.py:179: FutureWarning: Absence of either namespaces or cluster-wide flag will become an error soon. For now, switching to the cluster-wide mode.
  warnings.warn("Absence of either namespaces or cluster-wide flag will become an error soon."
[2024-11-21 13:53:12,606] kopf._core.reactor.r [DEBUG ] Starting Kopf 1.37.2.
[2024-11-21 13:53:12,606] kopf._core.engines.a [INFO ] Initial authentication has been initiated.
[2024-11-21 13:53:12,606] kopf.activities.auth [DEBUG ] Activity 'login_via_client' is invoked.
[2024-11-21 13:53:12,608] kopf.activities.auth [DEBUG ] Client is configured in cluster with service account.
[2024-11-21 13:53:12,609] kopf.activities.auth [INFO ] Activity 'login_via_client' succeeded.
[2024-11-21 13:53:12,609] kopf._core.engines.a [INFO ] Initial authentication has finished.
[2024-11-21 13:53:12,724] kopf._cogs.clients.w [DEBUG ] Starting the watch-stream for customresourcedefinitions.v1.apiextensions.k8s.io cluster-wide.
[2024-11-21 13:53:12,725] kopf._cogs.clients.w [DEBUG ] Starting the watch-stream for workflows.v1alpha1.tinkerbell.org cluster-wide.
[2024-11-21 13:53:12,725] kopf._cogs.clients.w [DEBUG ] Starting the watch-stream for jobs.v1alpha1.bmc.tinkerbell.org cluster-wide.
[2024-11-21 13:53:51,323] kopf.objects [DEBUG ] [tink/icadmin012] Creation is in progress: {'apiVersion': 'tinkerbell.org/v1alpha1', 'kind': 'Workflow', 'metadata': {'annotations'
[2024-11-21 13:53:51,323] kopf.objects [DEBUG ] [tink/icadmin012] Handler 'create_fn' is invoked.
[2024-11-21 13:53:51,324] kopf.objects [INFO ] [tink/icadmin012] Creating boot-to-PXE object for hardware: icadmin012
[2024-11-21 13:53:51,340] kubernetes.client.re [DEBUG ] response body: {"apiVersion":"bmc.tinkerbell.org/v1alpha1","kind":"Job","metadata":{"creationTimestamp":"2024-11-21T13:53:51Z","gen
[2024-11-21 13:53:51,340] kopf.objects [DEBUG ] [tink/icadmin012] Boot to PXE child is created: {'apiVersion': 'bmc.tinkerbell.org/v1alpha1', 'kind': 'Job', 'metadata': {'creation
[2024-11-21 13:53:51,341] kopf.objects [INFO ] [tink/icadmin012] object_name: {'boot-to-pxe-name': 'reboot-icadmin012-to-pxe'}
[2024-11-21 13:53:51,343] kopf.objects [INFO ] [tink/icadmin012] Handler 'create_fn' succeeded.
[2024-11-21 13:53:51,343] kopf.objects [INFO ] [tink/icadmin012] Creation is processed: 1 succeeded; 0 failed.
[2024-11-21 13:53:51,344] kopf.objects [DEBUG ] [tink/icadmin012] Patching with: {'status': {'create_fn': {'boot-to-pxe-name': 'reboot-icadmin012-to-pxe'}}, 'metadata': {'annotati
[2024-11-21 13:53:51,368] kopf.objects [WARNING ] [tink/icadmin012] Patching failed with inconsistencies: (('remove', ('status', 'create_fn'), {'boot-to-pxe-name': 'reboot-icadmin01
[2024-11-21 13:53:51,461] kopf.objects [DEBUG ] [tink/reboot-icadmin012-to-pxe] Creation is in progress: {'apiVersion': 'bmc.tinkerbell.org/v1alpha1', 'kind': 'Job', 'metadata': {
[2024-11-21 13:53:51,461] kopf.objects [DEBUG ] [tink/reboot-icadmin012-to-pxe] Patching with: {'metadata': {'annotations': {'kopf.zalando.org/last-handled-configuration': '{"spec
[2024-11-21 13:53:51,469] kopf.objects [DEBUG ] [tink/icadmin012] Something has changed, but we are not interested (the essence is the same).
[2024-11-21 13:53:51,469] kopf.objects [DEBUG ] [tink/icadmin012] Handling cycle is finished, waiting for new changes.
Is there any reason the second handler is never called?
I have the exact same issue, but with batch/v1 Jobs.
I want to be notified when a Job completes via a @kopf.on.update handler, but it never triggers.
The logs only show Something has changed, but we are not interested (the essence is the same).
@julb I don't know if it will help you, but I worked around the problem by using a timer.
Instead of catching the update event, I check the object every n seconds to see whether the value I'm expecting is there; if not, I simply do nothing.
It's a bit old school, but it does the trick.
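A minimal sketch of that timer-based workaround, assuming a bmc.tinkerbell.org Job and a 60-second interval; the condition being polled is an assumption, adapt it to whatever field you are waiting for:

```python
import kopf

@kopf.timer("bmc.tinkerbell.org", "v1alpha1", "jobs", interval=60)
def poll_job_fn(status, name, logger, **kwargs):
    # Poll the object periodically instead of relying on the update event.
    conditions = status.get("conditions", [])
    if conditions:
        logger.info(f"Job {name} reported conditions: {conditions}")
    # Otherwise do nothing and wait for the next tick.
```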
Hey @jaepetto
Thanks for your feedback.
On my side, since I was interested in the status change, I did: @kopf.on.update("batch", "v1", "jobs", field="status")
Now the handler is triggered correctly. Hope this helps.
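For reference, a minimal sketch of such a field-scoped handler (the handler name and log message are placeholders). By restricting the handler to the status field, status-only changes, which Kopf otherwise excludes from the object's "essence", do reach the handler:

```python
import kopf

@kopf.on.update("batch", "v1", "jobs", field="status")
def on_job_status_change(old, new, name, logger, **kwargs):
    # `old` and `new` hold the previous and current values of the `status` field.
    logger.info(f"Job {name} status changed from {old} to {new}")
```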