2026-01-01 00:00:07.037442 | Job console starting
2026-01-01 00:00:07.086008 | Updating git repos
2026-01-01 00:00:07.605326 | Cloning repos into workspace
2026-01-01 00:00:07.855496 | Restoring repo states
2026-01-01 00:00:07.914820 | Merging changes
2026-01-01 00:00:07.914919 | Checking out repos
2026-01-01 00:00:08.291575 | Preparing playbooks
2026-01-01 00:00:09.654515 | Running Ansible setup
2026-01-01 00:00:19.569482 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-01 00:00:23.869607 |
2026-01-01 00:00:23.869786 | PLAY [Base pre]
2026-01-01 00:00:24.041140 |
2026-01-01 00:00:24.041326 | TASK [Setup log path fact]
2026-01-01 00:00:24.117752 | orchestrator | ok
2026-01-01 00:00:24.242453 |
2026-01-01 00:00:24.242634 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-01 00:00:24.416676 | orchestrator | ok
2026-01-01 00:00:24.582159 |
2026-01-01 00:00:24.582320 | TASK [emit-job-header : Print job information]
2026-01-01 00:00:24.774652 | # Job Information
2026-01-01 00:00:24.774988 | Ansible Version: 2.16.14
2026-01-01 00:00:24.775066 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-01 00:00:24.775112 | Pipeline: periodic-midnight
2026-01-01 00:00:24.775137 | Executor: 521e9411259a
2026-01-01 00:00:24.775159 | Triggered by: https://github.com/osism/testbed
2026-01-01 00:00:24.775181 | Event ID: d1d26fc04bfb4173814cf17786ddfb96
2026-01-01 00:00:24.808528 |
2026-01-01 00:00:24.808698 | LOOP [emit-job-header : Print node information]
2026-01-01 00:00:25.385155 | orchestrator | ok:
2026-01-01 00:00:25.385492 | orchestrator | # Node Information
2026-01-01 00:00:25.385544 | orchestrator | Inventory Hostname: orchestrator
2026-01-01 00:00:25.385569 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-01 00:00:25.385592 | orchestrator | Username: zuul-testbed02
2026-01-01 00:00:25.385650 | orchestrator | Distro: Debian 12.12
2026-01-01 00:00:25.385676 | orchestrator | Provider: static-testbed
2026-01-01 00:00:25.385698 | orchestrator | Region:
2026-01-01 00:00:25.385719 | orchestrator | Label: testbed-orchestrator
2026-01-01 00:00:25.385739 | orchestrator | Product Name: OpenStack Nova
2026-01-01 00:00:25.385810 | orchestrator | Interface IP: 81.163.193.140
2026-01-01 00:00:25.408604 |
2026-01-01 00:00:25.408751 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-01 00:00:27.190935 | orchestrator -> localhost | changed
2026-01-01 00:00:27.199532 |
2026-01-01 00:00:27.199670 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-01 00:00:30.885428 | orchestrator -> localhost | changed
2026-01-01 00:00:30.919534 |
2026-01-01 00:00:30.920278 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-01 00:00:31.903865 | orchestrator -> localhost | ok
2026-01-01 00:00:31.915714 |
2026-01-01 00:00:31.915854 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-01 00:00:31.959595 | orchestrator | ok
2026-01-01 00:00:32.010382 | orchestrator | included: /var/lib/zuul/builds/f24727ee07344a4ebbbc3849d4298a4b/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-01 00:00:32.038675 |
2026-01-01 00:00:32.038816 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-01 00:00:36.135733 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-01 00:00:36.135975 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/f24727ee07344a4ebbbc3849d4298a4b/work/f24727ee07344a4ebbbc3849d4298a4b_id_rsa
2026-01-01 00:00:36.136015 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/f24727ee07344a4ebbbc3849d4298a4b/work/f24727ee07344a4ebbbc3849d4298a4b_id_rsa.pub
2026-01-01 00:00:36.136061 | orchestrator -> localhost | The key fingerprint is:
2026-01-01 00:00:36.136090 | orchestrator -> localhost | SHA256:qabAZPEKV4Vgsp7KV/GYgyuIN/1r7Q0XB86OQWTDyGo zuul-build-sshkey
2026-01-01 00:00:36.136114 | orchestrator -> localhost | The key's randomart image is:
2026-01-01 00:00:36.136153 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-01 00:00:36.136176 | orchestrator -> localhost | |. o. o.o+ |
2026-01-01 00:00:36.136198 | orchestrator -> localhost | | + ..oo.. |
2026-01-01 00:00:36.136219 | orchestrator -> localhost | |. . .o . . |
2026-01-01 00:00:36.136238 | orchestrator -> localhost | |. .+E =. + . |
2026-01-01 00:00:36.136258 | orchestrator -> localhost | |.o+o.= .S + . |
2026-01-01 00:00:36.136291 | orchestrator -> localhost | |=* oo .. + o |
2026-01-01 00:00:36.136314 | orchestrator -> localhost | |+o*o. o.o o |
2026-01-01 00:00:36.136336 | orchestrator -> localhost | | .oo +. .+ |
2026-01-01 00:00:36.136359 | orchestrator -> localhost | | ..oo. . |
2026-01-01 00:00:36.136379 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-01 00:00:36.136435 | orchestrator -> localhost | ok: Runtime: 0:00:01.536272
2026-01-01 00:00:36.144600 |
2026-01-01 00:00:36.144750 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-01 00:00:36.232170 | orchestrator | ok
2026-01-01 00:00:36.282965 | orchestrator | included: /var/lib/zuul/builds/f24727ee07344a4ebbbc3849d4298a4b/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-01 00:00:36.336895 |
2026-01-01 00:00:36.337090 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-01 00:00:36.417284 | orchestrator | skipping: Conditional result was False
2026-01-01 00:00:36.429313 |
2026-01-01 00:00:36.429478 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-01 00:00:37.791066 | orchestrator | changed
2026-01-01 00:00:37.799774 |
2026-01-01 00:00:37.799917 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-01 00:00:38.176653 | orchestrator | ok
2026-01-01 00:00:38.191873 |
2026-01-01 00:00:38.192064 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-01 00:00:38.793996 | orchestrator | ok
2026-01-01 00:00:38.815988 |
2026-01-01 00:00:38.816164 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-01 00:00:39.526824 | orchestrator | ok
2026-01-01 00:00:39.541357 |
2026-01-01 00:00:39.541508 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-01 00:00:39.620516 | orchestrator | skipping: Conditional result was False
2026-01-01 00:00:39.629651 |
2026-01-01 00:00:39.629811 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-01 00:00:41.543521 | orchestrator -> localhost | changed
2026-01-01 00:00:41.562072 |
2026-01-01 00:00:41.562284 | TASK [add-build-sshkey : Add back temp key]
2026-01-01 00:00:42.871983 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/f24727ee07344a4ebbbc3849d4298a4b/work/f24727ee07344a4ebbbc3849d4298a4b_id_rsa (zuul-build-sshkey)
2026-01-01 00:00:42.872283 | orchestrator -> localhost | ok: Runtime: 0:00:00.114541
2026-01-01 00:00:42.881246 |
2026-01-01 00:00:42.881384 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-01 00:00:44.605935 | orchestrator | ok
2026-01-01 00:00:44.640252 |
2026-01-01 00:00:44.641872 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-01 00:00:44.794285 | orchestrator | skipping: Conditional result was False
2026-01-01 00:00:45.087763 |
2026-01-01 00:00:45.087916 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-01 00:00:46.009568 | orchestrator | ok
2026-01-01 00:00:46.065218 |
2026-01-01 00:00:46.065424 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-01 00:00:46.182394 | orchestrator | ok
2026-01-01 00:00:46.231673 |
2026-01-01 00:00:46.231865 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-01 00:00:48.172520 | orchestrator -> localhost | ok
2026-01-01 00:00:48.201566 |
2026-01-01 00:00:48.201713 | TASK [validate-host : Collect information about the host]
2026-01-01 00:00:50.627654 | orchestrator | ok
2026-01-01 00:00:50.724818 |
2026-01-01 00:00:50.724988 | TASK [validate-host : Sanitize hostname]
2026-01-01 00:00:51.051611 | orchestrator | ok
2026-01-01 00:00:51.079148 |
2026-01-01 00:00:51.079306 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-01 00:00:55.908419 | orchestrator -> localhost | changed
2026-01-01 00:00:55.925137 |
2026-01-01 00:00:55.925276 | TASK [validate-host : Collect information about zuul worker]
2026-01-01 00:00:57.252556 | orchestrator | ok
2026-01-01 00:00:57.283377 |
2026-01-01 00:00:57.283535 | TASK [validate-host : Write out all zuul information for each host]
2026-01-01 00:01:01.157909 | orchestrator -> localhost | changed
2026-01-01 00:01:01.194102 |
2026-01-01 00:01:01.194516 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-01 00:01:01.618775 | orchestrator | ok
2026-01-01 00:01:01.637693 |
2026-01-01 00:01:01.648118 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-01 00:02:21.421995 | orchestrator | changed:
2026-01-01 00:02:21.422248 | orchestrator | .d..t...... src/
2026-01-01 00:02:21.422284 | orchestrator | .d..t...... src/github.com/
2026-01-01 00:02:21.422310 | orchestrator | .d..t...... src/github.com/osism/
2026-01-01 00:02:21.422332 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-01 00:02:21.422353 | orchestrator | RedHat.yml
2026-01-01 00:02:21.440615 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-01 00:02:21.440635 | orchestrator | RedHat.yml
2026-01-01 00:02:21.440695 | orchestrator | = 1.53.0"...
2026-01-01 00:02:31.925073 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-01 00:02:32.062159 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-01 00:02:32.551592 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-01 00:02:32.614600 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-01 00:02:33.342092 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-01 00:02:33.403604 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-01 00:02:33.913784 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-01 00:02:33.913847 | orchestrator |
2026-01-01 00:02:33.913855 | orchestrator | Providers are signed by their developers.
2026-01-01 00:02:33.913860 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-01 00:02:33.913865 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-01 00:02:33.914710 | orchestrator |
2026-01-01 00:02:33.914727 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-01 00:02:33.914733 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-01 00:02:33.914748 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-01 00:02:33.914752 | orchestrator | you run "tofu init" in the future.
2026-01-01 00:02:33.915142 | orchestrator |
2026-01-01 00:02:33.915169 | orchestrator | OpenTofu has been successfully initialized!
2026-01-01 00:02:33.915176 | orchestrator |
2026-01-01 00:02:33.915180 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-01 00:02:33.915184 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-01 00:02:33.915189 | orchestrator | should now work.
2026-01-01 00:02:33.915193 | orchestrator |
2026-01-01 00:02:33.915197 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-01 00:02:33.915201 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-01 00:02:33.915206 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-01 00:02:34.070979 | orchestrator | Created and switched to workspace "ci"!
2026-01-01 00:02:34.071054 | orchestrator |
2026-01-01 00:02:34.071063 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-01 00:02:34.071070 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-01 00:02:34.071076 | orchestrator | for this configuration.
2026-01-01 00:02:34.175411 | orchestrator | ci.auto.tfvars
2026-01-01 00:02:35.012404 | orchestrator | default_custom.tf
2026-01-01 00:02:37.055918 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-01 00:02:37.588370 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-01 00:02:37.886614 | orchestrator |
2026-01-01 00:02:37.887418 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-01 00:02:37.887651 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-01 00:02:37.887663 | orchestrator | + create
2026-01-01 00:02:37.887693 | orchestrator | <= read (data resources)
2026-01-01 00:02:37.887700 | orchestrator |
2026-01-01 00:02:37.887705 | orchestrator | OpenTofu will perform the following actions:
2026-01-01 00:02:37.887811 | orchestrator |
2026-01-01 00:02:37.887818 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-01 00:02:37.887840 | orchestrator | # (config refers to values not yet known)
2026-01-01 00:02:37.887846 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-01 00:02:37.887853 | orchestrator | + checksum = (known after apply)
2026-01-01 00:02:37.887859 | orchestrator | + created_at = (known after apply)
2026-01-01 00:02:37.887864 | orchestrator | + file = (known after apply)
2026-01-01 00:02:37.887869 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.888068 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.888074 | orchestrator | + min_disk_gb = (known after apply)
2026-01-01 00:02:37.888079 | orchestrator | + min_ram_mb = (known after apply)
2026-01-01 00:02:37.888085 | orchestrator | + most_recent = true
2026-01-01 00:02:37.888090 | orchestrator | + name = (known after apply)
2026-01-01 00:02:37.888095 | orchestrator | + protected = (known after apply)
2026-01-01 00:02:37.888100 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.888368 | orchestrator | + schema = (known after apply)
2026-01-01 00:02:37.888377 | orchestrator | + size_bytes = (known after apply)
2026-01-01 00:02:37.888383 | orchestrator | + tags = (known after apply)
2026-01-01 00:02:37.888388 | orchestrator | + updated_at = (known after apply)
2026-01-01 00:02:37.888394 | orchestrator | }
2026-01-01 00:02:37.888400 | orchestrator |
2026-01-01 00:02:37.888405 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-01 00:02:37.888412 | orchestrator | # (config refers to values not yet known)
2026-01-01 00:02:37.888419 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-01 00:02:37.888628 | orchestrator | + checksum = (known after apply)
2026-01-01 00:02:37.888634 | orchestrator | + created_at = (known after apply)
2026-01-01 00:02:37.888639 | orchestrator | + file = (known after apply)
2026-01-01 00:02:37.888644 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.888649 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.888654 | orchestrator | + min_disk_gb = (known after apply)
2026-01-01 00:02:37.888658 | orchestrator | + min_ram_mb = (known after apply)
2026-01-01 00:02:37.888664 | orchestrator | + most_recent = true
2026-01-01 00:02:37.888671 | orchestrator | + name = (known after apply)
2026-01-01 00:02:37.888677 | orchestrator | + protected = (known after apply)
2026-01-01 00:02:37.888862 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.888867 | orchestrator | + schema = (known after apply)
2026-01-01 00:02:37.888872 | orchestrator | + size_bytes = (known after apply)
2026-01-01 00:02:37.888877 | orchestrator | + tags = (known after apply)
2026-01-01 00:02:37.888882 | orchestrator | + updated_at = (known after apply)
2026-01-01 00:02:37.888888 | orchestrator | }
2026-01-01 00:02:37.888893 | orchestrator |
2026-01-01 00:02:37.888898 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-01 00:02:37.888903 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-01 00:02:37.888908 | orchestrator | + content = (known after apply)
2026-01-01 00:02:37.888913 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-01 00:02:37.889132 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-01 00:02:37.889139 | orchestrator | + content_md5 = (known after apply)
2026-01-01 00:02:37.889144 | orchestrator | + content_sha1 = (known after apply)
2026-01-01 00:02:37.889158 | orchestrator | + content_sha256 = (known after apply)
2026-01-01 00:02:37.889164 | orchestrator | + content_sha512 = (known after apply)
2026-01-01 00:02:37.889169 | orchestrator | + directory_permission = "0777"
2026-01-01 00:02:37.889174 | orchestrator | + file_permission = "0644"
2026-01-01 00:02:37.889180 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-01 00:02:37.889307 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.889313 | orchestrator | }
2026-01-01 00:02:37.889318 | orchestrator |
2026-01-01 00:02:37.889323 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-01 00:02:37.889328 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-01 00:02:37.889334 | orchestrator | + content = (known after apply)
2026-01-01 00:02:37.889338 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-01 00:02:37.889343 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-01 00:02:37.889348 | orchestrator | + content_md5 = (known after apply)
2026-01-01 00:02:37.889353 | orchestrator | + content_sha1 = (known after apply)
2026-01-01 00:02:37.889358 | orchestrator | + content_sha256 = (known after apply)
2026-01-01 00:02:37.889363 | orchestrator | + content_sha512 = (known after apply)
2026-01-01 00:02:37.889368 | orchestrator | + directory_permission = "0777"
2026-01-01 00:02:37.889373 | orchestrator | + file_permission = "0644"
2026-01-01 00:02:37.889388 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-01 00:02:37.889393 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.889398 | orchestrator | }
2026-01-01 00:02:37.889403 | orchestrator |
2026-01-01 00:02:37.889413 | orchestrator | # local_file.inventory will be created
2026-01-01 00:02:37.889418 | orchestrator | + resource "local_file" "inventory" {
2026-01-01 00:02:37.890295 | orchestrator | + content = (known after apply)
2026-01-01 00:02:37.890316 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-01 00:02:37.890320 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-01 00:02:37.890325 | orchestrator | + content_md5 = (known after apply)
2026-01-01 00:02:37.890330 | orchestrator | + content_sha1 = (known after apply)
2026-01-01 00:02:37.890337 | orchestrator | + content_sha256 = (known after apply)
2026-01-01 00:02:37.890342 | orchestrator | + content_sha512 = (known after apply)
2026-01-01 00:02:37.890347 | orchestrator | + directory_permission = "0777"
2026-01-01 00:02:37.890352 | orchestrator | + file_permission = "0644"
2026-01-01 00:02:37.890356 | orchestrator | + filename = "inventory.ci"
2026-01-01 00:02:37.890361 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.890366 | orchestrator | }
2026-01-01 00:02:37.890370 | orchestrator |
2026-01-01 00:02:37.890375 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-01 00:02:37.890380 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-01 00:02:37.890385 | orchestrator | + content = (sensitive value)
2026-01-01 00:02:37.890390 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-01 00:02:37.890394 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-01 00:02:37.890399 | orchestrator | + content_md5 = (known after apply)
2026-01-01 00:02:37.890404 | orchestrator | + content_sha1 = (known after apply)
2026-01-01 00:02:37.890408 | orchestrator | + content_sha256 = (known after apply)
2026-01-01 00:02:37.893053 | orchestrator | + content_sha512 = (known after apply)
2026-01-01 00:02:37.893093 | orchestrator | + directory_permission = "0700"
2026-01-01 00:02:37.893100 | orchestrator | + file_permission = "0600"
2026-01-01 00:02:37.893105 | orchestrator | + filename = ".id_rsa.ci"
2026-01-01 00:02:37.893109 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893113 | orchestrator | }
2026-01-01 00:02:37.893118 | orchestrator |
2026-01-01 00:02:37.893123 | orchestrator | # null_resource.node_semaphore will be created
2026-01-01 00:02:37.893127 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-01 00:02:37.893131 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893136 | orchestrator | }
2026-01-01 00:02:37.893140 | orchestrator |
2026-01-01 00:02:37.893145 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-01 00:02:37.893150 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-01 00:02:37.893154 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893159 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893163 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893167 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:37.893171 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893176 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-01 00:02:37.893182 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893187 | orchestrator | + size = 80
2026-01-01 00:02:37.893192 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893198 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893203 | orchestrator | }
2026-01-01 00:02:37.893208 | orchestrator |
2026-01-01 00:02:37.893212 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-01 00:02:37.893217 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:37.893277 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893283 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893288 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893303 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:37.893308 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893312 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-01 00:02:37.893316 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893321 | orchestrator | + size = 80
2026-01-01 00:02:37.893325 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893329 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893333 | orchestrator | }
2026-01-01 00:02:37.893337 | orchestrator |
2026-01-01 00:02:37.893342 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-01 00:02:37.893346 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:37.893350 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893354 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893359 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893363 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:37.893367 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893371 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-01 00:02:37.893375 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893379 | orchestrator | + size = 80
2026-01-01 00:02:37.893383 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893388 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893392 | orchestrator | }
2026-01-01 00:02:37.893396 | orchestrator |
2026-01-01 00:02:37.893400 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-01 00:02:37.893404 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:37.893408 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893412 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893417 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893461 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:37.893466 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893471 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-01 00:02:37.893475 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893479 | orchestrator | + size = 80
2026-01-01 00:02:37.893483 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893488 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893492 | orchestrator | }
2026-01-01 00:02:37.893496 | orchestrator |
2026-01-01 00:02:37.893500 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-01 00:02:37.893504 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:37.893509 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893513 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893517 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893521 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:37.893525 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893534 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-01 00:02:37.893539 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893543 | orchestrator | + size = 80
2026-01-01 00:02:37.893547 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893551 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893556 | orchestrator | }
2026-01-01 00:02:37.893561 | orchestrator |
2026-01-01 00:02:37.893566 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-01 00:02:37.893571 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:37.893576 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893582 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893586 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893594 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:37.893598 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893603 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-01 00:02:37.893607 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893611 | orchestrator | + size = 80
2026-01-01 00:02:37.893615 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893619 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893623 | orchestrator | }
2026-01-01 00:02:37.893627 | orchestrator |
2026-01-01 00:02:37.893632 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-01 00:02:37.893644 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:37.893649 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893653 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893657 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893661 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:37.893665 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893670 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-01 00:02:37.893674 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893678 | orchestrator | + size = 80
2026-01-01 00:02:37.893682 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893687 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893691 | orchestrator | }
2026-01-01 00:02:37.893695 | orchestrator |
2026-01-01 00:02:37.893700 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-01 00:02:37.893706 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:37.893710 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893714 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893718 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893722 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893726 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-01 00:02:37.893730 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893734 | orchestrator | + size = 20
2026-01-01 00:02:37.893738 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893742 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893745 | orchestrator | }
2026-01-01 00:02:37.893749 | orchestrator |
2026-01-01 00:02:37.893753 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-01 00:02:37.893757 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:37.893761 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893765 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893769 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893773 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893777 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-01 00:02:37.893780 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893784 | orchestrator | + size = 20
2026-01-01 00:02:37.893788 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893792 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893796 | orchestrator | }
2026-01-01 00:02:37.893799 | orchestrator |
2026-01-01 00:02:37.893803 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-01 00:02:37.893807 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:37.893811 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893815 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893819 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893823 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893826 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-01 00:02:37.893830 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893837 | orchestrator | + size = 20
2026-01-01 00:02:37.893841 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893845 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893849 | orchestrator | }
2026-01-01 00:02:37.893853 | orchestrator |
2026-01-01 00:02:37.893856 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-01 00:02:37.893860 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:37.893864 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893868 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893872 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893876 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893879 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-01 00:02:37.893883 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893888 | orchestrator | + size = 20
2026-01-01 00:02:37.893893 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893898 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893902 | orchestrator | }
2026-01-01 00:02:37.893907 | orchestrator |
2026-01-01 00:02:37.893912 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-01 00:02:37.893917 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:37.893920 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893924 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893928 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893932 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.893936 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-01 00:02:37.893940 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.893946 | orchestrator | + size = 20
2026-01-01 00:02:37.893950 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.893966 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.893970 | orchestrator | }
2026-01-01 00:02:37.893974 | orchestrator |
2026-01-01 00:02:37.893978 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-01 00:02:37.893982 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:37.893986 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.893989 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.893994 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.893998 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.894001 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-01 00:02:37.894005 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.894009 | orchestrator | + size = 20
2026-01-01 00:02:37.894038 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.894043 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.894047 | orchestrator | }
2026-01-01 00:02:37.894051 | orchestrator |
2026-01-01 00:02:37.894055 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-01 00:02:37.894059 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:37.894063 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.894067 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.894071 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.894078 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.894083 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-01 00:02:37.894086 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.894090 | orchestrator | + size = 20
2026-01-01 00:02:37.894094 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.894098 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.894102 | orchestrator | }
2026-01-01 00:02:37.894106 | orchestrator |
2026-01-01 00:02:37.894109 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-01 00:02:37.894113 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:37.894121 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:37.894129 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:37.894133 | orchestrator | + id = (known after apply)
2026-01-01 00:02:37.894137 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:37.894140 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-01 00:02:37.894144 | orchestrator | + region = (known after apply)
2026-01-01 00:02:37.894148 | orchestrator | + size = 20
2026-01-01 00:02:37.894152 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:37.894156 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:37.894160 | orchestrator | }
2026-01-01 00:02:37.894164 | orchestrator |
2026-01-01 00:02:37.894167 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-01 00:02:37.894171 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-01 00:02:37.894175 | orchestrator | + attachment = (known after apply) 2026-01-01 00:02:37.894179 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:37.894183 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.894186 | orchestrator | + metadata = (known after apply) 2026-01-01 00:02:37.894190 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-01 00:02:37.894194 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.894198 | orchestrator | + size = 20 2026-01-01 00:02:37.894202 | orchestrator | + volume_retype_policy = "never" 2026-01-01 00:02:37.894205 | orchestrator | + volume_type = "ssd" 2026-01-01 00:02:37.894209 | orchestrator | } 2026-01-01 00:02:37.894213 | orchestrator | 2026-01-01 00:02:37.894217 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-01 00:02:37.894221 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-01 00:02:37.894224 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:37.894228 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:37.894233 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:37.894238 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:37.894242 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:37.894247 | orchestrator | + config_drive = true 2026-01-01 00:02:37.894252 | orchestrator | + created = (known after apply) 2026-01-01 00:02:37.894255 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:37.894260 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-01 00:02:37.894265 | orchestrator | + force_delete = false 2026-01-01 00:02:37.894269 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:37.894274 | 
orchestrator | + id = (known after apply) 2026-01-01 00:02:37.894278 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:37.894282 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:37.894286 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:37.894290 | orchestrator | + name = "testbed-manager" 2026-01-01 00:02:37.894293 | orchestrator | + power_state = "active" 2026-01-01 00:02:37.894297 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.894301 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:37.894305 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:37.894309 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:37.894312 | orchestrator | + user_data = (sensitive value) 2026-01-01 00:02:37.894316 | orchestrator | 2026-01-01 00:02:37.894320 | orchestrator | + block_device { 2026-01-01 00:02:37.894324 | orchestrator | + boot_index = 0 2026-01-01 00:02:37.894328 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:37.894334 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:37.894338 | orchestrator | + multiattach = false 2026-01-01 00:02:37.894342 | orchestrator | + source_type = "volume" 2026-01-01 00:02:37.894346 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.894353 | orchestrator | } 2026-01-01 00:02:37.894357 | orchestrator | 2026-01-01 00:02:37.894361 | orchestrator | + network { 2026-01-01 00:02:37.894365 | orchestrator | + access_network = false 2026-01-01 00:02:37.894369 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:37.894373 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:37.894376 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:37.894380 | orchestrator | + name = (known after apply) 2026-01-01 00:02:37.894384 | orchestrator | + port = (known after apply) 2026-01-01 00:02:37.894388 | orchestrator | + uuid = (known after apply) 2026-01-01 
00:02:37.894392 | orchestrator | } 2026-01-01 00:02:37.894396 | orchestrator | } 2026-01-01 00:02:37.894399 | orchestrator | 2026-01-01 00:02:37.894403 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-01 00:02:37.894407 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:37.894411 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:37.894415 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:37.894418 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:37.894431 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:37.894436 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:37.894439 | orchestrator | + config_drive = true 2026-01-01 00:02:37.894443 | orchestrator | + created = (known after apply) 2026-01-01 00:02:37.894447 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:37.894451 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:37.894455 | orchestrator | + force_delete = false 2026-01-01 00:02:37.894459 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:37.894463 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.894466 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:37.894470 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:37.894474 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:37.894478 | orchestrator | + name = "testbed-node-0" 2026-01-01 00:02:37.894482 | orchestrator | + power_state = "active" 2026-01-01 00:02:37.894488 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.894492 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:37.894496 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:37.894500 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:37.894504 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:37.894508 | orchestrator | 2026-01-01 00:02:37.894512 | orchestrator | + block_device { 2026-01-01 00:02:37.894516 | orchestrator | + boot_index = 0 2026-01-01 00:02:37.894519 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:37.894523 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:37.894527 | orchestrator | + multiattach = false 2026-01-01 00:02:37.894531 | orchestrator | + source_type = "volume" 2026-01-01 00:02:37.894535 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.894539 | orchestrator | } 2026-01-01 00:02:37.894543 | orchestrator | 2026-01-01 00:02:37.894546 | orchestrator | + network { 2026-01-01 00:02:37.894550 | orchestrator | + access_network = false 2026-01-01 00:02:37.894554 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:37.894558 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:37.894562 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:37.894566 | orchestrator | + name = (known after apply) 2026-01-01 00:02:37.894570 | orchestrator | + port = (known after apply) 2026-01-01 00:02:37.894573 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.894578 | orchestrator | } 2026-01-01 00:02:37.894583 | orchestrator | } 2026-01-01 00:02:37.894588 | orchestrator | 2026-01-01 00:02:37.894593 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-01 00:02:37.894598 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:37.894602 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:37.894616 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:37.894619 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:37.894623 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:37.894627 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:37.894631 
| orchestrator | + config_drive = true 2026-01-01 00:02:37.894635 | orchestrator | + created = (known after apply) 2026-01-01 00:02:37.894638 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:37.894642 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:37.894646 | orchestrator | + force_delete = false 2026-01-01 00:02:37.894650 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:37.894654 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.894657 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:37.894661 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:37.894665 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:37.894669 | orchestrator | + name = "testbed-node-1" 2026-01-01 00:02:37.894673 | orchestrator | + power_state = "active" 2026-01-01 00:02:37.894676 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.894680 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:37.894684 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:37.894688 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:37.894692 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:37.894695 | orchestrator | 2026-01-01 00:02:37.894699 | orchestrator | + block_device { 2026-01-01 00:02:37.894703 | orchestrator | + boot_index = 0 2026-01-01 00:02:37.894707 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:37.894711 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:37.894714 | orchestrator | + multiattach = false 2026-01-01 00:02:37.894718 | orchestrator | + source_type = "volume" 2026-01-01 00:02:37.894722 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.894726 | orchestrator | } 2026-01-01 00:02:37.894730 | orchestrator | 2026-01-01 00:02:37.894734 | orchestrator | + network { 2026-01-01 00:02:37.894737 | orchestrator | + access_network = 
false 2026-01-01 00:02:37.894741 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:37.894745 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:37.894749 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:37.894753 | orchestrator | + name = (known after apply) 2026-01-01 00:02:37.894756 | orchestrator | + port = (known after apply) 2026-01-01 00:02:37.894760 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.894764 | orchestrator | } 2026-01-01 00:02:37.894768 | orchestrator | } 2026-01-01 00:02:37.894771 | orchestrator | 2026-01-01 00:02:37.894775 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-01 00:02:37.894779 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:37.894783 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:37.894787 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:37.894791 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:37.894795 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:37.894801 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:37.894805 | orchestrator | + config_drive = true 2026-01-01 00:02:37.894809 | orchestrator | + created = (known after apply) 2026-01-01 00:02:37.894813 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:37.894817 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:37.894821 | orchestrator | + force_delete = false 2026-01-01 00:02:37.894824 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:37.894828 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.894832 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:37.894838 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:37.894842 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:37.894846 | orchestrator | + name = 
"testbed-node-2" 2026-01-01 00:02:37.894851 | orchestrator | + power_state = "active" 2026-01-01 00:02:37.894855 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.894860 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:37.894865 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:37.894870 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:37.894875 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:37.894880 | orchestrator | 2026-01-01 00:02:37.894884 | orchestrator | + block_device { 2026-01-01 00:02:37.894888 | orchestrator | + boot_index = 0 2026-01-01 00:02:37.894891 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:37.894895 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:37.894902 | orchestrator | + multiattach = false 2026-01-01 00:02:37.894906 | orchestrator | + source_type = "volume" 2026-01-01 00:02:37.894910 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.894913 | orchestrator | } 2026-01-01 00:02:37.894917 | orchestrator | 2026-01-01 00:02:37.894921 | orchestrator | + network { 2026-01-01 00:02:37.894925 | orchestrator | + access_network = false 2026-01-01 00:02:37.894929 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:37.894933 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:37.894936 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:37.894940 | orchestrator | + name = (known after apply) 2026-01-01 00:02:37.894944 | orchestrator | + port = (known after apply) 2026-01-01 00:02:37.894948 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.894952 | orchestrator | } 2026-01-01 00:02:37.894956 | orchestrator | } 2026-01-01 00:02:37.894959 | orchestrator | 2026-01-01 00:02:37.894963 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-01 00:02:37.894967 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:37.894971 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:37.894975 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:37.894979 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:37.894982 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:37.894986 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:37.894990 | orchestrator | + config_drive = true 2026-01-01 00:02:37.894994 | orchestrator | + created = (known after apply) 2026-01-01 00:02:37.894998 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:37.895002 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:37.895006 | orchestrator | + force_delete = false 2026-01-01 00:02:37.895009 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:37.895013 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.895017 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:37.895021 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:37.895025 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:37.895028 | orchestrator | + name = "testbed-node-3" 2026-01-01 00:02:37.895032 | orchestrator | + power_state = "active" 2026-01-01 00:02:37.895036 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.895040 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:37.895044 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:37.895047 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:37.895051 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:37.895055 | orchestrator | 2026-01-01 00:02:37.895059 | orchestrator | + block_device { 2026-01-01 00:02:37.895065 | orchestrator | + boot_index = 0 2026-01-01 00:02:37.895069 | orchestrator | + delete_on_termination = false 2026-01-01 
00:02:37.895073 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:37.895080 | orchestrator | + multiattach = false 2026-01-01 00:02:37.895084 | orchestrator | + source_type = "volume" 2026-01-01 00:02:37.895088 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.895092 | orchestrator | } 2026-01-01 00:02:37.895096 | orchestrator | 2026-01-01 00:02:37.895100 | orchestrator | + network { 2026-01-01 00:02:37.895103 | orchestrator | + access_network = false 2026-01-01 00:02:37.895107 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:37.895111 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:37.895115 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:37.895119 | orchestrator | + name = (known after apply) 2026-01-01 00:02:37.895123 | orchestrator | + port = (known after apply) 2026-01-01 00:02:37.895126 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.895130 | orchestrator | } 2026-01-01 00:02:37.895134 | orchestrator | } 2026-01-01 00:02:37.895138 | orchestrator | 2026-01-01 00:02:37.895142 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-01 00:02:37.895146 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:37.895150 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:37.895153 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:37.895157 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:37.895161 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:37.895165 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:37.895169 | orchestrator | + config_drive = true 2026-01-01 00:02:37.895173 | orchestrator | + created = (known after apply) 2026-01-01 00:02:37.895176 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:37.895180 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:37.895184 | 
orchestrator | + force_delete = false 2026-01-01 00:02:37.895188 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:37.895192 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.895195 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:37.895199 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:37.895204 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:37.895208 | orchestrator | + name = "testbed-node-4" 2026-01-01 00:02:37.895213 | orchestrator | + power_state = "active" 2026-01-01 00:02:37.895218 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.895222 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:37.895227 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:37.895231 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:37.895235 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:37.895239 | orchestrator | 2026-01-01 00:02:37.895242 | orchestrator | + block_device { 2026-01-01 00:02:37.895246 | orchestrator | + boot_index = 0 2026-01-01 00:02:37.895250 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:37.895254 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:37.895258 | orchestrator | + multiattach = false 2026-01-01 00:02:37.895262 | orchestrator | + source_type = "volume" 2026-01-01 00:02:37.895266 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.895269 | orchestrator | } 2026-01-01 00:02:37.895273 | orchestrator | 2026-01-01 00:02:37.895277 | orchestrator | + network { 2026-01-01 00:02:37.895281 | orchestrator | + access_network = false 2026-01-01 00:02:37.895285 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:37.895289 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:37.895292 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:37.895296 | orchestrator | + name = (known 
after apply) 2026-01-01 00:02:37.895300 | orchestrator | + port = (known after apply) 2026-01-01 00:02:37.895306 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.895310 | orchestrator | } 2026-01-01 00:02:37.895314 | orchestrator | } 2026-01-01 00:02:37.895321 | orchestrator | 2026-01-01 00:02:37.895325 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-01 00:02:37.895329 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:37.895333 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:37.895337 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:37.895341 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:37.895345 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:37.895349 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:37.895353 | orchestrator | + config_drive = true 2026-01-01 00:02:37.895356 | orchestrator | + created = (known after apply) 2026-01-01 00:02:37.895360 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:37.895364 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:37.895368 | orchestrator | + force_delete = false 2026-01-01 00:02:37.895375 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:37.895378 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.895382 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:37.895386 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:37.895390 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:37.895394 | orchestrator | + name = "testbed-node-5" 2026-01-01 00:02:37.895398 | orchestrator | + power_state = "active" 2026-01-01 00:02:37.895401 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.895405 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:37.895409 | orchestrator | + 
stop_before_destroy = false 2026-01-01 00:02:37.895413 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:37.895417 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:37.895420 | orchestrator | 2026-01-01 00:02:37.895434 | orchestrator | + block_device { 2026-01-01 00:02:37.895438 | orchestrator | + boot_index = 0 2026-01-01 00:02:37.895442 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:37.895445 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:37.895449 | orchestrator | + multiattach = false 2026-01-01 00:02:37.895453 | orchestrator | + source_type = "volume" 2026-01-01 00:02:37.895457 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.895461 | orchestrator | } 2026-01-01 00:02:37.895464 | orchestrator | 2026-01-01 00:02:37.895468 | orchestrator | + network { 2026-01-01 00:02:37.895472 | orchestrator | + access_network = false 2026-01-01 00:02:37.895476 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:37.895480 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:37.895484 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:37.895488 | orchestrator | + name = (known after apply) 2026-01-01 00:02:37.895492 | orchestrator | + port = (known after apply) 2026-01-01 00:02:37.895495 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:37.895499 | orchestrator | } 2026-01-01 00:02:37.895503 | orchestrator | } 2026-01-01 00:02:37.895507 | orchestrator | 2026-01-01 00:02:37.895511 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-01 00:02:37.895515 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-01 00:02:37.895519 | orchestrator | + fingerprint = (known after apply) 2026-01-01 00:02:37.895522 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.895526 | orchestrator | + name = "testbed" 2026-01-01 00:02:37.895530 | orchestrator | + private_key = 
(sensitive value) 2026-01-01 00:02:37.895534 | orchestrator | + public_key = (known after apply) 2026-01-01 00:02:37.895539 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.895544 | orchestrator | + user_id = (known after apply) 2026-01-01 00:02:37.895549 | orchestrator | } 2026-01-01 00:02:37.895556 | orchestrator | 2026-01-01 00:02:37.895564 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-01 00:02:37.895576 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-01 00:02:37.895584 | orchestrator | + device = (known after apply) 2026-01-01 00:02:37.895588 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.895591 | orchestrator | + instance_id = (known after apply) 2026-01-01 00:02:37.895595 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.895599 | orchestrator | + volume_id = (known after apply) 2026-01-01 00:02:37.895603 | orchestrator | } 2026-01-01 00:02:37.895607 | orchestrator | 2026-01-01 00:02:37.895611 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-01 00:02:37.895614 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-01 00:02:37.895618 | orchestrator | + device = (known after apply) 2026-01-01 00:02:37.895622 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.895626 | orchestrator | + instance_id = (known after apply) 2026-01-01 00:02:37.895629 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.895633 | orchestrator | + volume_id = (known after apply) 2026-01-01 00:02:37.895637 | orchestrator | } 2026-01-01 00:02:37.895641 | orchestrator | 2026-01-01 00:02:37.895645 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-01 00:02:37.895649 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-01 00:02:37.898317 | orchestrator | + network_id = (known after apply) 2026-01-01 00:02:37.898340 | orchestrator | + no_gateway = false 2026-01-01 00:02:37.898345 | orchestrator | + region = (known after apply) 2026-01-01 00:02:37.898349 | orchestrator | + service_types = (known after apply) 2026-01-01 00:02:37.898359 | orchestrator | + tenant_id = (known after apply) 2026-01-01 00:02:37.898363 | orchestrator | 2026-01-01 00:02:37.901732 | orchestrator | + allocation_pool { 2026-01-01 00:02:37.901747 | orchestrator | + end = "192.168.31.250" 2026-01-01 00:02:37.901752 | orchestrator | + start = "192.168.31.200" 2026-01-01 00:02:37.901757 | orchestrator | } 2026-01-01 00:02:37.901762 | orchestrator | } 2026-01-01 00:02:37.901766 | orchestrator | 2026-01-01 00:02:37.901771 | orchestrator | # terraform_data.image will be created 2026-01-01 00:02:37.901775 | orchestrator | + resource "terraform_data" "image" { 2026-01-01 00:02:37.901779 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.901784 | orchestrator | + input = "Ubuntu 24.04" 2026-01-01 00:02:37.901789 | orchestrator | + output = (known after apply) 2026-01-01 00:02:37.901793 | orchestrator | } 2026-01-01 00:02:37.901798 | orchestrator | 2026-01-01 00:02:37.901803 | orchestrator | # terraform_data.image_node will be created 2026-01-01 00:02:37.901807 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-01 00:02:37.901812 | orchestrator | + id = (known after apply) 2026-01-01 00:02:37.901816 | orchestrator | + input = "Ubuntu 24.04" 2026-01-01 00:02:37.901821 | orchestrator | + output = (known after apply) 2026-01-01 00:02:37.901825 | orchestrator | } 2026-01-01 00:02:37.901830 | orchestrator | 2026-01-01 00:02:37.901834 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-01-01 00:02:37.901839 | orchestrator | 2026-01-01 00:02:37.901843 | orchestrator | Changes to Outputs: 2026-01-01 00:02:37.901847 | orchestrator | + manager_address = (sensitive value) 2026-01-01 00:02:37.901852 | orchestrator | + private_key = (sensitive value) 2026-01-01 00:02:38.199949 | orchestrator | terraform_data.image: Creating... 2026-01-01 00:02:38.200838 | orchestrator | terraform_data.image: Creation complete after 0s [id=736444e5-92e1-032b-9d89-bf4f83092f29] 2026-01-01 00:02:38.210091 | orchestrator | terraform_data.image_node: Creating... 2026-01-01 00:02:38.210148 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=3510df5f-063a-5a82-b12c-d8b0a73c1e4b] 2026-01-01 00:02:38.226871 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-01 00:02:38.226937 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-01 00:02:38.236259 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-01 00:02:38.250047 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-01-01 00:02:38.250075 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-01-01 00:02:38.250081 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-01-01 00:02:38.250085 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-01-01 00:02:38.250089 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-01 00:02:38.250093 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-01-01 00:02:38.250098 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-01-01 00:02:38.709765 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-01 00:02:38.714566 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 
2026-01-01 00:02:38.720578 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-01 00:02:38.727834 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-01-01 00:02:38.763026 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-01-01 00:02:38.769851 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-01-01 00:02:39.522164 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=f1ac3df8-d2c6-4b23-bb19-e38a7d196b76] 2026-01-01 00:02:39.534132 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-01-01 00:02:41.896693 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=915f467d-f7f4-452d-8987-30de089be54c] 2026-01-01 00:02:41.904398 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-01-01 00:02:41.910110 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=de0fd659-da77-4142-b79c-576ab421e122] 2026-01-01 00:02:41.928783 | orchestrator | local_file.id_rsa_pub: Creating... 2026-01-01 00:02:41.928839 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=4d1e5713-0224-44c3-b7e5-3534e8d42515] 2026-01-01 00:02:41.932669 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=d0081afa25f0bc6ddd290094573b51342729f07b] 2026-01-01 00:02:41.932715 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-01-01 00:02:41.938792 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 
2026-01-01 00:02:41.940590 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=c9c8c13d-3654-44d4-a582-9918730b81ab] 2026-01-01 00:02:41.968006 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-01-01 00:02:41.969769 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=5a98e6dc-3aee-4304-8fec-4f13a362fcb3] 2026-01-01 00:02:41.993906 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=76bcb63e-af93-4dd0-8322-d7e595f79a86] 2026-01-01 00:02:41.998760 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=20b388ad-01dd-4aa6-8ed7-7c845856acdf] 2026-01-01 00:02:41.998955 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-01-01 00:02:42.007001 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-01-01 00:02:42.011068 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=d4e1dc1059865cc7effa9cb78ecbe0406a88081e] 2026-01-01 00:02:42.019025 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-01-01 00:02:42.024509 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-01-01 00:02:42.031807 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=39cbb32c-70e6-49f5-af10-21eee2e163cb] 2026-01-01 00:02:42.055135 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=caf45da1-cd9a-4b0f-9c1b-eed918b4e62f] 2026-01-01 00:02:42.930954 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=4e7c7978-65f8-429f-801b-32837f185740] 2026-01-01 00:02:43.045680 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=1bbbc7b1-f261-4d1a-9bac-fe02e62c19b6] 2026-01-01 00:02:43.052919 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-01-01 00:02:45.373261 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=b1096fbb-20c9-48ea-8453-55839ef2db2e] 2026-01-01 00:02:45.416726 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=27dcfef2-d19c-4c37-9c2e-324f30ce5c65] 2026-01-01 00:02:45.429795 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=47ffa559-76d7-406a-af3c-e43e80efec30] 2026-01-01 00:02:45.457704 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=7d0a4f68-2481-476d-ba91-058fec8c2e3c] 2026-01-01 00:02:46.086667 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=c7126eb4-3df9-4966-9c87-58f79188c0e4] 2026-01-01 00:02:46.194386 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=434f30f3-358f-4fc6-b78f-2729b9d7ead8] 2026-01-01 00:02:47.071738 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=7cffdb4a-aa43-4730-9508-c8898732613f] 2026-01-01 00:02:47.077308 | orchestrator | 
openstack_networking_secgroup_v2.security_group_management: Creating... 2026-01-01 00:02:47.078129 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-01-01 00:02:47.081709 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-01-01 00:02:47.293217 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=4932961d-ffce-4ad5-af17-63a325273c53] 2026-01-01 00:02:47.302380 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-01-01 00:02:47.302575 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-01-01 00:02:47.303002 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-01-01 00:02:47.305414 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-01-01 00:02:47.309007 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-01-01 00:02:47.318134 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-01-01 00:02:47.318164 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-01-01 00:02:47.322596 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-01-01 00:02:47.333276 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4680e572-bb80-4ac5-be7d-5d4cbcf5bd45] 2026-01-01 00:02:47.340287 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-01-01 00:02:47.488548 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=37f1b997-0b1f-4e65-af93-b052d2394889] 2026-01-01 00:02:47.508509 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 
2026-01-01 00:02:47.673152 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=87123345-a5c0-4885-8810-2ea760e4b6bd] 2026-01-01 00:02:47.680526 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-01-01 00:02:47.967796 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=2e48dc31-0467-4445-a1f8-1d8782fd72f0] 2026-01-01 00:02:47.978605 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-01-01 00:02:47.984501 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=ea3a7a39-79fe-4a0b-89d3-34fa43a0bf1b] 2026-01-01 00:02:47.989019 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-01-01 00:02:48.077628 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=9894bcfe-70e6-4d02-985a-3eaf0a5a9fb1] 2026-01-01 00:02:48.085177 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-01-01 00:02:48.098896 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=398689b0-2158-444d-a994-3479913ce599] 2026-01-01 00:02:48.108328 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-01-01 00:02:48.167609 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=86a99773-648c-42e1-90ac-f0c7cca2b280] 2026-01-01 00:02:48.171338 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=7ece3ee2-bc87-4801-99d9-36d0158d60a6] 2026-01-01 00:02:48.173508 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
2026-01-01 00:02:48.342816 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=8d20d08f-7961-4e1d-9b2c-0807676c1973] 2026-01-01 00:02:48.345506 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=3dc0171b-f919-48e3-8f2a-bae90aaf92d1] 2026-01-01 00:02:48.570189 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=7a231ef3-372b-4f29-95a8-4fc04d657a60] 2026-01-01 00:02:48.666617 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=72c122b5-953c-42ed-9b3d-ef3284107294] 2026-01-01 00:02:48.751952 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=fb11ad80-3940-4f2a-86c9-8dfb52725a52] 2026-01-01 00:02:48.790508 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=350df578-e3cb-4418-bff1-6f5642e04453] 2026-01-01 00:02:48.899125 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=be6a21a8-2a0e-4c32-8086-539e2aa6ef88] 2026-01-01 00:02:49.087691 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=95b47004-ce46-483f-8619-3b7783223e62] 2026-01-01 00:02:50.533322 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=b271a110-ba88-4a93-9099-8a087c7dfb53] 2026-01-01 00:02:50.575786 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-01-01 00:02:50.579580 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-01-01 00:02:50.580270 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-01-01 00:02:50.580977 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 
2026-01-01 00:02:50.583139 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-01-01 00:02:50.590091 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-01-01 00:02:50.608246 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-01-01 00:02:52.456873 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=6dfa3a84-1957-4393-be31-ff3c0d7d24ed] 2026-01-01 00:02:52.463575 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-01-01 00:02:52.468930 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-01-01 00:02:52.475254 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=df1a796b9be2a3fe3875cb7c3cab99360bec0582] 2026-01-01 00:02:52.478466 | orchestrator | local_file.inventory: Creating... 2026-01-01 00:02:52.484310 | orchestrator | local_file.inventory: Creation complete after 0s [id=05da58db7ff3adc5273efb3dd158e1f6561b55fe] 2026-01-01 00:02:53.807881 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=6dfa3a84-1957-4393-be31-ff3c0d7d24ed] 2026-01-01 00:03:00.581607 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-01-01 00:03:00.581727 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-01-01 00:03:00.581733 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-01-01 00:03:00.585990 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-01-01 00:03:00.596284 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-01-01 00:03:00.609698 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[10s elapsed] 2026-01-01 00:03:10.582151 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-01-01 00:03:10.582263 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-01-01 00:03:10.582271 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-01-01 00:03:10.586332 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-01-01 00:03:10.597068 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-01-01 00:03:10.610289 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-01-01 00:03:11.183706 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=ad2073e3-3e49-49f1-82fc-4ad9785c980c] 2026-01-01 00:03:20.591649 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-01-01 00:03:20.591771 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-01-01 00:03:20.591787 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-01-01 00:03:20.598007 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-01-01 00:03:20.611549 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2026-01-01 00:03:21.492354 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=07d0102c-353a-40b5-b7ca-0e9db18f04ce] 2026-01-01 00:03:21.506261 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=8c657440-4a96-4978-84df-b54001bfe278] 2026-01-01 00:03:21.932248 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=4d26f62a-1f64-4a38-a14e-d41365b419f2] 2026-01-01 00:03:22.000541 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=b2747024-86e2-4d7d-96df-93e52b747428] 2026-01-01 00:03:22.024839 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=8097f0f6-6b0a-47c5-8d42-2e221a49b49a] 2026-01-01 00:03:22.053013 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-01-01 00:03:22.053548 | orchestrator | null_resource.node_semaphore: Creating... 2026-01-01 00:03:22.056183 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-01-01 00:03:22.057718 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-01-01 00:03:22.058978 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-01-01 00:03:22.065403 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-01-01 00:03:22.072499 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4943568992860841351] 2026-01-01 00:03:22.084644 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-01-01 00:03:22.098802 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-01-01 00:03:22.099242 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 
2026-01-01 00:03:22.102449 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-01-01 00:03:22.121324 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-01-01 00:03:25.510054 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=4d26f62a-1f64-4a38-a14e-d41365b419f2/caf45da1-cd9a-4b0f-9c1b-eed918b4e62f] 2026-01-01 00:03:25.511852 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=8c657440-4a96-4978-84df-b54001bfe278/5a98e6dc-3aee-4304-8fec-4f13a362fcb3] 2026-01-01 00:03:25.546298 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=b2747024-86e2-4d7d-96df-93e52b747428/20b388ad-01dd-4aa6-8ed7-7c845856acdf] 2026-01-01 00:03:25.607467 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=4d26f62a-1f64-4a38-a14e-d41365b419f2/915f467d-f7f4-452d-8987-30de089be54c] 2026-01-01 00:03:25.628099 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=b2747024-86e2-4d7d-96df-93e52b747428/4d1e5713-0224-44c3-b7e5-3534e8d42515] 2026-01-01 00:03:31.722525 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=4d26f62a-1f64-4a38-a14e-d41365b419f2/c9c8c13d-3654-44d4-a582-9918730b81ab] 2026-01-01 00:03:31.760100 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=8c657440-4a96-4978-84df-b54001bfe278/76bcb63e-af93-4dd0-8322-d7e595f79a86] 2026-01-01 00:03:31.813279 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=b2747024-86e2-4d7d-96df-93e52b747428/39cbb32c-70e6-49f5-af10-21eee2e163cb] 2026-01-01 00:03:31.846177 | orchestrator | 
openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=8c657440-4a96-4978-84df-b54001bfe278/de0fd659-da77-4142-b79c-576ab421e122] 2026-01-01 00:03:32.122119 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-01-01 00:03:42.123146 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-01-01 00:03:42.468341 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=4fe684fd-6a4a-41bc-8d50-272f16731a40] 2026-01-01 00:03:42.482761 | orchestrator | 2026-01-01 00:03:42.482849 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-01-01 00:03:42.482863 | orchestrator | 2026-01-01 00:03:42.482872 | orchestrator | Outputs: 2026-01-01 00:03:42.482881 | orchestrator | 2026-01-01 00:03:42.482900 | orchestrator | manager_address = 2026-01-01 00:03:42.482909 | orchestrator | private_key = 2026-01-01 00:03:42.897063 | orchestrator | ok: Runtime: 0:01:10.851591 2026-01-01 00:03:42.936566 | 2026-01-01 00:03:42.936751 | TASK [Create infrastructure (stable)] 2026-01-01 00:03:43.495832 | orchestrator | skipping: Conditional result was False 2026-01-01 00:03:43.514132 | 2026-01-01 00:03:43.514310 | TASK [Fetch manager address] 2026-01-01 00:03:43.985678 | orchestrator | ok 2026-01-01 00:03:43.993685 | 2026-01-01 00:03:43.993805 | TASK [Set manager_host address] 2026-01-01 00:03:44.066915 | orchestrator | ok 2026-01-01 00:03:44.079134 | 2026-01-01 00:03:44.079286 | LOOP [Update ansible collections] 2026-01-01 00:03:45.026141 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-01 00:03:45.026597 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-01 00:03:45.026664 | orchestrator | Starting galaxy collection install process 2026-01-01 00:03:45.026708 | orchestrator | Process install dependency map 
2026-01-01 00:03:45.026747 | orchestrator | Starting collection install process 2026-01-01 00:03:45.026784 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2026-01-01 00:03:45.026828 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2026-01-01 00:03:45.026913 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-01-01 00:03:45.027023 | orchestrator | ok: Item: commons Runtime: 0:00:00.588872 2026-01-01 00:03:46.041170 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-01 00:03:46.041298 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-01 00:03:46.041329 | orchestrator | Starting galaxy collection install process 2026-01-01 00:03:46.041352 | orchestrator | Process install dependency map 2026-01-01 00:03:46.041374 | orchestrator | Starting collection install process 2026-01-01 00:03:46.041412 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2026-01-01 00:03:46.041433 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2026-01-01 00:03:46.041454 | orchestrator | osism.services:999.0.0 was installed successfully 2026-01-01 00:03:46.041490 | orchestrator | ok: Item: services Runtime: 0:00:00.743120 2026-01-01 00:03:46.061183 | 2026-01-01 00:03:46.061348 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-01 00:03:56.625189 | orchestrator | ok 2026-01-01 00:03:56.635768 | 2026-01-01 00:03:56.635903 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-01 00:04:56.688279 | orchestrator | ok 2026-01-01 00:04:56.699990 | 2026-01-01 00:04:56.700311 
| TASK [Fetch manager ssh hostkey] 2026-01-01 00:04:58.288563 | orchestrator | Output suppressed because no_log was given 2026-01-01 00:04:58.296726 | 2026-01-01 00:04:58.296864 | TASK [Get ssh keypair from terraform environment] 2026-01-01 00:04:58.836927 | orchestrator | ok: Runtime: 0:00:00.010705 2026-01-01 00:04:58.851343 | 2026-01-01 00:04:58.851493 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-01 00:04:58.893982 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-01-01 00:04:58.901442 | 2026-01-01 00:04:58.901555 | TASK [Run manager part 0] 2026-01-01 00:04:59.777484 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-01 00:04:59.826193 | orchestrator | 2026-01-01 00:04:59.826251 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-01-01 00:04:59.826260 | orchestrator | 2026-01-01 00:04:59.826275 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-01-01 00:05:01.806560 | orchestrator | ok: [testbed-manager] 2026-01-01 00:05:01.806634 | orchestrator | 2026-01-01 00:05:01.806666 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-01 00:05:01.806680 | orchestrator | 2026-01-01 00:05:01.806693 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:05:03.823906 | orchestrator | ok: [testbed-manager] 2026-01-01 00:05:03.823977 | orchestrator | 2026-01-01 00:05:03.823986 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-01 00:05:04.594235 | orchestrator | ok: [testbed-manager] 2026-01-01 00:05:04.594309 | orchestrator | 2026-01-01 00:05:04.594321 | orchestrator | TASK 
[Set repo_path fact] ****************************************************** 2026-01-01 00:05:04.642094 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:04.642152 | orchestrator | 2026-01-01 00:05:04.642162 | orchestrator | TASK [Update package cache] **************************************************** 2026-01-01 00:05:04.674877 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:04.674943 | orchestrator | 2026-01-01 00:05:04.674954 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-01 00:05:04.720367 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:04.720448 | orchestrator | 2026-01-01 00:05:04.720457 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-01 00:05:04.762922 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:04.762991 | orchestrator | 2026-01-01 00:05:04.763002 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-01 00:05:04.798379 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:04.798469 | orchestrator | 2026-01-01 00:05:04.798479 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-01 00:05:04.850742 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:04.850823 | orchestrator | 2026-01-01 00:05:04.850837 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-01 00:05:04.893035 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:04.893088 | orchestrator | 2026-01-01 00:05:04.893097 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-01 00:05:05.699661 | orchestrator | changed: [testbed-manager] 2026-01-01 00:05:05.699746 | orchestrator | 2026-01-01 00:05:05.699761 | orchestrator | TASK [Update APT cache and run dist-upgrade] 
*********************************** 2026-01-01 00:07:56.796135 | orchestrator | changed: [testbed-manager] 2026-01-01 00:07:56.796211 | orchestrator | 2026-01-01 00:07:56.796225 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-01 00:09:38.258239 | orchestrator | changed: [testbed-manager] 2026-01-01 00:09:38.258334 | orchestrator | 2026-01-01 00:09:38.258361 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-01 00:10:06.814001 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:06.814062 | orchestrator | 2026-01-01 00:10:06.814072 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-01 00:10:16.752360 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:16.752457 | orchestrator | 2026-01-01 00:10:16.752476 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-01 00:10:16.797687 | orchestrator | ok: [testbed-manager] 2026-01-01 00:10:16.797761 | orchestrator | 2026-01-01 00:10:16.797776 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-01 00:10:17.622062 | orchestrator | ok: [testbed-manager] 2026-01-01 00:10:17.622143 | orchestrator | 2026-01-01 00:10:17.622159 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-01 00:10:18.408426 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:18.408548 | orchestrator | 2026-01-01 00:10:18.408569 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-01 00:10:25.115181 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:25.115396 | orchestrator | 2026-01-01 00:10:25.115442 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-01 00:10:31.566600 | orchestrator | changed: 
[testbed-manager] 2026-01-01 00:10:31.566691 | orchestrator | 2026-01-01 00:10:31.566711 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-01 00:10:34.386828 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:34.386931 | orchestrator | 2026-01-01 00:10:34.386948 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-01 00:10:36.247134 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:36.247217 | orchestrator | 2026-01-01 00:10:36.247235 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-01 00:10:37.405858 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-01 00:10:37.405954 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-01 00:10:37.405970 | orchestrator | 2026-01-01 00:10:37.405984 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-01 00:10:37.466481 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-01 00:10:37.466544 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-01 00:10:37.466553 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-01 00:10:37.466561 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-01 00:10:42.532752 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-01 00:10:42.532837 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-01 00:10:42.532849 | orchestrator | 2026-01-01 00:10:42.532859 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-01 00:10:43.155230 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:43.155276 | orchestrator | 2026-01-01 00:10:43.155307 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-01 00:15:03.457172 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-01 00:15:03.457280 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-01 00:15:03.457293 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-01 00:15:03.457300 | orchestrator | 2026-01-01 00:15:03.457309 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-01 00:15:05.999761 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-01 00:15:05.999893 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-01 00:15:05.999910 | orchestrator | 2026-01-01 00:15:05.999922 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-01 00:15:05.999934 | orchestrator | 2026-01-01 00:15:05.999946 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:15:07.470209 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:07.470295 | orchestrator | 2026-01-01 00:15:07.470312 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-01 00:15:07.516222 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:07.516266 | 
orchestrator | 2026-01-01 00:15:07.516273 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-01 00:15:07.566006 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:07.566052 | orchestrator | 2026-01-01 00:15:07.566058 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-01 00:15:08.352067 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:08.352193 | orchestrator | 2026-01-01 00:15:08.352220 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-01 00:15:09.070719 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:09.070756 | orchestrator | 2026-01-01 00:15:09.070763 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-01 00:15:10.441571 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-01 00:15:10.441603 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-01 00:15:10.441609 | orchestrator | 2026-01-01 00:15:10.441620 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-01 00:15:11.809182 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:11.809225 | orchestrator | 2026-01-01 00:15:11.809231 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-01 00:15:13.491706 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-01 00:15:13.491763 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-01 00:15:13.491771 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-01 00:15:13.491778 | orchestrator | 2026-01-01 00:15:13.491786 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-01 00:15:13.534490 | orchestrator | skipping: 
[testbed-manager] 2026-01-01 00:15:13.534530 | orchestrator | 2026-01-01 00:15:13.534537 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-01 00:15:13.595137 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:13.595179 | orchestrator | 2026-01-01 00:15:13.595186 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-01 00:15:14.139022 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:14.139848 | orchestrator | 2026-01-01 00:15:14.139860 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-01 00:15:14.209800 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:14.209843 | orchestrator | 2026-01-01 00:15:14.209851 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-01 00:15:15.024107 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:15:15.024144 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:15.024152 | orchestrator | 2026-01-01 00:15:15.024159 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-01 00:15:15.058292 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:15.058331 | orchestrator | 2026-01-01 00:15:15.058340 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-01 00:15:15.084466 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:15.084497 | orchestrator | 2026-01-01 00:15:15.084505 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-01 00:15:15.116788 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:15.116824 | orchestrator | 2026-01-01 00:15:15.116831 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-01 00:15:15.192682 | 
orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:15.192723 | orchestrator | 2026-01-01 00:15:15.192731 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-01 00:15:15.886189 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:15.886292 | orchestrator | 2026-01-01 00:15:15.886308 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-01 00:15:15.886321 | orchestrator | 2026-01-01 00:15:15.886339 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:15:17.319565 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:17.319673 | orchestrator | 2026-01-01 00:15:17.319698 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-01 00:15:18.318824 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:18.318926 | orchestrator | 2026-01-01 00:15:18.318944 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:15:18.318958 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-01 00:15:18.318971 | orchestrator | 2026-01-01 00:15:18.804951 | orchestrator | ok: Runtime: 0:10:19.227512 2026-01-01 00:15:18.820163 | 2026-01-01 00:15:18.820318 | TASK [Point out that logging in to the manager is now possible] 2026-01-01 00:15:18.855433 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-01 00:15:18.864721 | 2026-01-01 00:15:18.864840 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-01 00:15:18.906175 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-01 00:15:18.913270 | 2026-01-01 00:15:18.913382 | TASK [Run manager part 1 + 2] 2026-01-01 00:15:20.076961 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-01 00:15:20.139688 | orchestrator | 2026-01-01 00:15:20.139752 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-01 00:15:20.139759 | orchestrator | 2026-01-01 00:15:20.139775 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:15:23.202005 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:23.202167 | orchestrator | 2026-01-01 00:15:23.202231 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-01 00:15:23.247027 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:23.247153 | orchestrator | 2026-01-01 00:15:23.247177 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-01 00:15:23.303938 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:23.304033 | orchestrator | 2026-01-01 00:15:23.304047 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-01 00:15:23.352812 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:23.352892 | orchestrator | 2026-01-01 00:15:23.352904 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-01 00:15:23.430417 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:23.430517 | orchestrator | 2026-01-01 00:15:23.430534 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-01 00:15:23.507547 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:23.507611 | orchestrator | 2026-01-01 00:15:23.507619 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-01 00:15:23.566113 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-01 00:15:23.566211 | orchestrator | 2026-01-01 00:15:23.566225 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-01 00:15:24.303989 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:24.304794 | orchestrator | 2026-01-01 00:15:24.304858 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-01 00:15:24.363797 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:24.363891 | orchestrator | 2026-01-01 00:15:24.363902 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-01 00:15:25.946966 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:25.947085 | orchestrator | 2026-01-01 00:15:25.947104 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-01 00:15:26.577585 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:26.577696 | orchestrator | 2026-01-01 00:15:26.577713 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-01 00:15:27.801038 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:27.801136 | orchestrator | 2026-01-01 00:15:27.801155 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-01 00:15:43.951603 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:43.951654 | orchestrator | 2026-01-01 00:15:43.951661 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-01 00:15:44.661823 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:44.661941 | orchestrator | 2026-01-01 00:15:44.661962 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-01 00:15:44.721696 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:44.721795 | orchestrator | 2026-01-01 00:15:44.721813 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-01 00:15:45.705222 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:45.705276 | orchestrator | 2026-01-01 00:15:45.705286 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-01 00:15:46.687169 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:46.687308 | orchestrator | 2026-01-01 00:15:46.687340 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-01 00:15:47.254181 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:47.254263 | orchestrator | 2026-01-01 00:15:47.254273 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-01 00:15:47.287502 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-01 00:15:47.287617 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-01 00:15:47.287630 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-01 00:15:47.287641 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-01 00:15:49.546990 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:49.547082 | orchestrator | 2026-01-01 00:15:49.547096 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-01 00:15:59.228289 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-01 00:15:59.228406 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-01 00:15:59.228425 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-01 00:15:59.228438 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-01 00:15:59.228458 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-01 00:15:59.228470 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-01 00:15:59.228481 | orchestrator | 2026-01-01 00:15:59.228494 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-01 00:16:00.321722 | orchestrator | changed: [testbed-manager] 2026-01-01 00:16:00.321774 | orchestrator | 2026-01-01 00:16:00.321783 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-01 00:16:00.363903 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:16:00.363955 | orchestrator | 2026-01-01 00:16:00.363963 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-01 00:16:03.397683 | orchestrator | changed: [testbed-manager] 2026-01-01 00:16:03.397740 | orchestrator | 2026-01-01 00:16:03.397751 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-01 00:16:03.441567 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:16:03.441619 | orchestrator | 2026-01-01 00:16:03.441628 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-01 00:17:48.622427 | orchestrator | changed: [testbed-manager] 2026-01-01 
00:17:48.622485 | orchestrator | 2026-01-01 00:17:48.622493 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-01 00:17:49.883089 | orchestrator | ok: [testbed-manager] 2026-01-01 00:17:49.883140 | orchestrator | 2026-01-01 00:17:49.883147 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:17:49.883154 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-01 00:17:49.883159 | orchestrator | 2026-01-01 00:17:50.054987 | orchestrator | ok: Runtime: 0:02:30.778320 2026-01-01 00:17:50.071373 | 2026-01-01 00:17:50.071546 | TASK [Reboot manager] 2026-01-01 00:17:51.610377 | orchestrator | ok: Runtime: 0:00:00.935301 2026-01-01 00:17:51.619581 | 2026-01-01 00:17:51.619701 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-01 00:18:06.094207 | orchestrator | ok 2026-01-01 00:18:06.107859 | 2026-01-01 00:18:06.108064 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-01 00:19:06.162610 | orchestrator | ok 2026-01-01 00:19:06.173458 | 2026-01-01 00:19:06.173619 | TASK [Deploy manager + bootstrap nodes] 2026-01-01 00:19:08.959469 | orchestrator | 2026-01-01 00:19:08.959706 | orchestrator | # DEPLOY MANAGER 2026-01-01 00:19:08.959730 | orchestrator | 2026-01-01 00:19:08.959744 | orchestrator | + set -e 2026-01-01 00:19:08.959757 | orchestrator | + echo 2026-01-01 00:19:08.959771 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-01 00:19:08.959788 | orchestrator | + echo 2026-01-01 00:19:08.959838 | orchestrator | + cat /opt/manager-vars.sh 2026-01-01 00:19:08.964023 | orchestrator | export NUMBER_OF_NODES=6 2026-01-01 00:19:08.964047 | orchestrator | 2026-01-01 00:19:08.964058 | orchestrator | export CEPH_VERSION=reef 2026-01-01 00:19:08.964071 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-01 00:19:08.964083 | orchestrator 
| export MANAGER_VERSION=latest 2026-01-01 00:19:08.964105 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-01-01 00:19:08.964115 | orchestrator | 2026-01-01 00:19:08.964132 | orchestrator | export ARA=false 2026-01-01 00:19:08.964143 | orchestrator | export DEPLOY_MODE=manager 2026-01-01 00:19:08.964160 | orchestrator | export TEMPEST=true 2026-01-01 00:19:08.964171 | orchestrator | export IS_ZUUL=true 2026-01-01 00:19:08.964181 | orchestrator | 2026-01-01 00:19:08.964198 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-01-01 00:19:08.964210 | orchestrator | export EXTERNAL_API=false 2026-01-01 00:19:08.964220 | orchestrator | 2026-01-01 00:19:08.964230 | orchestrator | export IMAGE_USER=ubuntu 2026-01-01 00:19:08.964245 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-01 00:19:08.964255 | orchestrator | 2026-01-01 00:19:08.964266 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-01 00:19:08.964511 | orchestrator | 2026-01-01 00:19:08.964528 | orchestrator | + echo 2026-01-01 00:19:08.964540 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-01 00:19:08.965532 | orchestrator | ++ export INTERACTIVE=false 2026-01-01 00:19:08.965549 | orchestrator | ++ INTERACTIVE=false 2026-01-01 00:19:08.965562 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-01 00:19:08.965575 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-01 00:19:08.965870 | orchestrator | + source /opt/manager-vars.sh 2026-01-01 00:19:08.965885 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-01 00:19:08.965895 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-01 00:19:08.965909 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-01 00:19:08.965919 | orchestrator | ++ CEPH_VERSION=reef 2026-01-01 00:19:08.965930 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-01 00:19:08.965964 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-01 00:19:08.966107 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-01 00:19:08.966121 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-01-01 00:19:08.966132 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-01 00:19:08.966149 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-01 00:19:08.966160 | orchestrator | ++ export ARA=false 2026-01-01 00:19:08.966170 | orchestrator | ++ ARA=false 2026-01-01 00:19:08.966180 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-01 00:19:08.966190 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-01 00:19:08.966200 | orchestrator | ++ export TEMPEST=true 2026-01-01 00:19:08.966210 | orchestrator | ++ TEMPEST=true 2026-01-01 00:19:08.966224 | orchestrator | ++ export IS_ZUUL=true 2026-01-01 00:19:08.966234 | orchestrator | ++ IS_ZUUL=true 2026-01-01 00:19:08.966244 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-01-01 00:19:08.966255 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-01-01 00:19:08.966265 | orchestrator | ++ export EXTERNAL_API=false 2026-01-01 00:19:08.966275 | orchestrator | ++ EXTERNAL_API=false 2026-01-01 00:19:08.966284 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-01 00:19:08.966294 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-01 00:19:08.966304 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-01 00:19:08.966314 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-01 00:19:08.966324 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-01 00:19:08.966334 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-01 00:19:08.966344 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-01 00:19:09.026421 | orchestrator | + docker version 2026-01-01 00:19:09.333560 | orchestrator | Client: Docker Engine - Community 2026-01-01 00:19:09.333687 | orchestrator | Version: 27.5.1 2026-01-01 00:19:09.333702 | orchestrator | API version: 1.47 2026-01-01 00:19:09.333715 | orchestrator | Go version: go1.22.11 2026-01-01 00:19:09.333726 | orchestrator | Git commit: 9f9e405 2026-01-01 00:19:09.333736 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-01 00:19:09.333748 | orchestrator | OS/Arch: linux/amd64 2026-01-01 00:19:09.333758 | orchestrator | Context: default 2026-01-01 00:19:09.333768 | orchestrator | 2026-01-01 00:19:09.333778 | orchestrator | Server: Docker Engine - Community 2026-01-01 00:19:09.333789 | orchestrator | Engine: 2026-01-01 00:19:09.333799 | orchestrator | Version: 27.5.1 2026-01-01 00:19:09.333810 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-01 00:19:09.333859 | orchestrator | Go version: go1.22.11 2026-01-01 00:19:09.333869 | orchestrator | Git commit: 4c9b3b0 2026-01-01 00:19:09.333880 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-01 00:19:09.333889 | orchestrator | OS/Arch: linux/amd64 2026-01-01 00:19:09.333899 | orchestrator | Experimental: false 2026-01-01 00:19:09.333909 | orchestrator | containerd: 2026-01-01 00:19:09.333919 | orchestrator | Version: v2.2.1 2026-01-01 00:19:09.333930 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-01 00:19:09.333980 | orchestrator | runc: 2026-01-01 00:19:09.333991 | orchestrator | Version: 1.3.4 2026-01-01 00:19:09.334001 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-01 00:19:09.334011 | orchestrator | docker-init: 2026-01-01 00:19:09.334084 | orchestrator | Version: 0.19.0 2026-01-01 00:19:09.334096 | orchestrator | GitCommit: de40ad0 2026-01-01 00:19:09.336550 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-01 00:19:09.345886 | orchestrator | + set -e 2026-01-01 00:19:09.345916 | orchestrator | + source /opt/manager-vars.sh 2026-01-01 00:19:09.345929 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-01 00:19:09.345962 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-01 00:19:09.345974 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-01 00:19:09.345985 | orchestrator | ++ CEPH_VERSION=reef 2026-01-01 00:19:09.345997 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-01 
00:19:09.346009 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-01 00:19:09.346061 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-01 00:19:09.346073 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-01 00:19:09.346084 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-01 00:19:09.346095 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-01 00:19:09.346106 | orchestrator | ++ export ARA=false 2026-01-01 00:19:09.346117 | orchestrator | ++ ARA=false 2026-01-01 00:19:09.346128 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-01 00:19:09.346140 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-01 00:19:09.346157 | orchestrator | ++ export TEMPEST=true 2026-01-01 00:19:09.346169 | orchestrator | ++ TEMPEST=true 2026-01-01 00:19:09.346181 | orchestrator | ++ export IS_ZUUL=true 2026-01-01 00:19:09.346192 | orchestrator | ++ IS_ZUUL=true 2026-01-01 00:19:09.346203 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-01-01 00:19:09.346214 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-01-01 00:19:09.346225 | orchestrator | ++ export EXTERNAL_API=false 2026-01-01 00:19:09.346236 | orchestrator | ++ EXTERNAL_API=false 2026-01-01 00:19:09.346247 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-01 00:19:09.346258 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-01 00:19:09.346269 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-01 00:19:09.346280 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-01 00:19:09.346292 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-01 00:19:09.346302 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-01 00:19:09.346313 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-01 00:19:09.346324 | orchestrator | ++ export INTERACTIVE=false 2026-01-01 00:19:09.346335 | orchestrator | ++ INTERACTIVE=false 2026-01-01 00:19:09.346346 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-01 00:19:09.346362 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-01-01 00:19:09.346577 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-01 00:19:09.346594 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-01 00:19:09.346606 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-01-01 00:19:09.350420 | orchestrator | + set -e
2026-01-01 00:19:09.350441 | orchestrator | + VERSION=reef
2026-01-01 00:19:09.351280 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-01-01 00:19:09.356963 | orchestrator | + [[ -n ceph_version: reef ]]
2026-01-01 00:19:09.356985 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-01-01 00:19:09.362380 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1
2026-01-01 00:19:09.370064 | orchestrator | + set -e
2026-01-01 00:19:09.370096 | orchestrator | + VERSION=2025.1
2026-01-01 00:19:09.370593 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-01-01 00:19:09.372812 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-01-01 00:19:09.372836 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml
2026-01-01 00:19:09.378461 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-01-01 00:19:09.379258 | orchestrator | ++ semver latest 7.0.0
2026-01-01 00:19:09.442356 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-01 00:19:09.442459 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-01 00:19:09.442479 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-01-01 00:19:09.443242 | orchestrator | ++ semver latest 10.0.0-0
2026-01-01 00:19:09.505512 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-01 00:19:09.506503 | orchestrator | ++ semver 2025.1 2025.1
2026-01-01 00:19:09.591298 | orchestrator | + [[ 0 -ge 0 ]]
2026-01-01 00:19:09.591415 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-01-01 00:19:09.599287 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-01-01 00:19:09.602848 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-01-01 00:19:09.696184 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-01 00:19:09.697499 | orchestrator | + source /opt/venv/bin/activate
2026-01-01 00:19:09.698646 | orchestrator | ++ deactivate nondestructive
2026-01-01 00:19:09.698685 | orchestrator | ++ '[' -n '' ']'
2026-01-01 00:19:09.698695 | orchestrator | ++ '[' -n '' ']'
2026-01-01 00:19:09.698702 | orchestrator | ++ hash -r
2026-01-01 00:19:09.698739 | orchestrator | ++ '[' -n '' ']'
2026-01-01 00:19:09.698820 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-01 00:19:09.698831 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-01 00:19:09.699049 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-01 00:19:09.699069 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-01 00:19:09.699078 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-01 00:19:09.699106 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-01 00:19:09.699114 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-01 00:19:09.699122 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-01 00:19:09.699148 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-01 00:19:09.699155 | orchestrator | ++ export PATH
2026-01-01 00:19:09.699164 | orchestrator | ++ '[' -n '' ']'
2026-01-01 00:19:09.699377 | orchestrator | ++ '[' -z '' ']'
2026-01-01 00:19:09.699395 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-01 00:19:09.699402 | orchestrator | ++ PS1='(venv) '
2026-01-01 00:19:09.699410 | orchestrator | ++ export PS1
2026-01-01 00:19:09.699421 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-01 00:19:09.699428 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-01 00:19:09.699434 | orchestrator | ++ hash -r
2026-01-01 00:19:09.699606 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-01-01 00:19:11.010424 | orchestrator |
2026-01-01 00:19:11.010546 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-01-01 00:19:11.010566 | orchestrator |
2026-01-01 00:19:11.010579 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-01 00:19:11.541448 | orchestrator | ok: [testbed-manager]
2026-01-01 00:19:11.541578 | orchestrator |
2026-01-01 00:19:11.541596 | orchestrator | TASK [Copy fact files] *********************************************************
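The `set-ceph-version.sh` and `set-openstack-version.sh` traces above both follow a grep-then-sed pattern: check that the key already exists in the configuration file, then rewrite it in place. A minimal standalone sketch of that pattern (hypothetical file and values, not the OSISM scripts themselves):

```shell
# Hypothetical sketch of the grep-then-sed version bump seen in the trace.
set -e
CONFIG=$(mktemp)
echo "openstack_version: 2024.2" > "$CONFIG"
VERSION=2025.1
# Guard mirrors the [[ -n ... ]] check in the log: only rewrite the key
# when it is already present (grep output is non-empty).
if [[ -n "$(grep '^openstack_version:' "$CONFIG" || true)" ]]; then
    sed -i "s/openstack_version: .*/openstack_version: ${VERSION}/g" "$CONFIG"
fi
cat "$CONFIG"
```

Anchoring the sed pattern on `openstack_version: .*` keeps the substitution idempotent: rerunning the script with the same version leaves the file unchanged.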
2026-01-01 00:19:12.515644 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:12.515793 | orchestrator | 2026-01-01 00:19:12.515815 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-01 00:19:12.515828 | orchestrator | 2026-01-01 00:19:12.515840 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:19:15.081437 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:15.081586 | orchestrator | 2026-01-01 00:19:15.081606 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-01 00:19:15.136917 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:15.137095 | orchestrator | 2026-01-01 00:19:15.137113 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-01 00:19:15.625736 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:15.625865 | orchestrator | 2026-01-01 00:19:15.625883 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-01 00:19:15.660763 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:15.660863 | orchestrator | 2026-01-01 00:19:15.660878 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-01 00:19:16.032672 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:16.032823 | orchestrator | 2026-01-01 00:19:16.033838 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-01 00:19:16.100014 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:16.100141 | orchestrator | 2026-01-01 00:19:16.100161 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-01 00:19:16.444636 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:16.444760 | orchestrator | 2026-01-01 00:19:16.444778 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2026-01-01 00:19:16.568761 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:16.568880 | orchestrator | 2026-01-01 00:19:16.568895 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-01 00:19:16.568909 | orchestrator | 2026-01-01 00:19:16.568921 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:19:18.466469 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:18.466604 | orchestrator | 2026-01-01 00:19:18.466623 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-01 00:19:18.571007 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-01 00:19:18.571154 | orchestrator | 2026-01-01 00:19:18.571182 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-01 00:19:18.645765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-01 00:19:18.645881 | orchestrator | 2026-01-01 00:19:18.645896 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-01 00:19:19.833579 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-01 00:19:19.833705 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-01 00:19:19.833721 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-01 00:19:19.833734 | orchestrator | 2026-01-01 00:19:19.833747 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-01 00:19:21.638478 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-01 00:19:21.639523 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
2026-01-01 00:19:21.639557 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-01 00:19:21.639571 | orchestrator | 2026-01-01 00:19:21.639584 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-01 00:19:22.232621 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:19:22.232742 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:22.232759 | orchestrator | 2026-01-01 00:19:22.232772 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-01 00:19:22.860082 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:19:22.860238 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:22.860268 | orchestrator | 2026-01-01 00:19:22.860288 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-01 00:19:22.902404 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:22.902507 | orchestrator | 2026-01-01 00:19:22.902518 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-01 00:19:23.250314 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:23.250417 | orchestrator | 2026-01-01 00:19:23.250429 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-01 00:19:23.328507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-01 00:19:23.328623 | orchestrator | 2026-01-01 00:19:23.328637 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-01 00:19:24.423019 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:24.423150 | orchestrator | 2026-01-01 00:19:24.423168 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-01 
00:19:25.216856 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:25.217032 | orchestrator | 2026-01-01 00:19:25.217051 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-01 00:19:35.855169 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:35.855285 | orchestrator | 2026-01-01 00:19:35.855304 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-01 00:19:35.915462 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:35.915568 | orchestrator | 2026-01-01 00:19:35.915581 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-01 00:19:35.915591 | orchestrator | 2026-01-01 00:19:35.915599 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:19:37.805432 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:37.805638 | orchestrator | 2026-01-01 00:19:37.805665 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-01 00:19:37.935687 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-01 00:19:37.935789 | orchestrator | 2026-01-01 00:19:37.935804 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-01 00:19:37.997344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-01 00:19:37.997438 | orchestrator | 2026-01-01 00:19:37.997455 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-01 00:19:40.748321 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:40.748426 | orchestrator | 2026-01-01 00:19:40.748436 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-01 00:19:40.805182 | 
orchestrator | ok: [testbed-manager] 2026-01-01 00:19:40.805273 | orchestrator | 2026-01-01 00:19:40.805281 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-01 00:19:40.944801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-01 00:19:40.944910 | orchestrator | 2026-01-01 00:19:40.944920 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-01 00:19:43.985294 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-01 00:19:43.985423 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-01 00:19:43.985439 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-01 00:19:43.985452 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-01 00:19:43.985464 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-01 00:19:43.985477 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-01 00:19:43.985489 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-01 00:19:43.985500 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-01 00:19:43.985511 | orchestrator | 2026-01-01 00:19:43.985524 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-01 00:19:44.674282 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:44.674407 | orchestrator | 2026-01-01 00:19:44.674425 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-01 00:19:45.331448 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:45.331576 | orchestrator | 2026-01-01 00:19:45.331596 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-01 
00:19:45.416088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-01 00:19:45.416222 | orchestrator | 2026-01-01 00:19:45.416239 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-01 00:19:46.741218 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-01 00:19:46.741359 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-01 00:19:46.741379 | orchestrator | 2026-01-01 00:19:46.741393 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-01 00:19:47.406415 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:47.406546 | orchestrator | 2026-01-01 00:19:47.406563 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-01 00:19:47.468650 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:47.468800 | orchestrator | 2026-01-01 00:19:47.468819 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-01 00:19:47.567034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-01 00:19:47.567195 | orchestrator | 2026-01-01 00:19:47.567211 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-01 00:19:48.220531 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:48.220653 | orchestrator | 2026-01-01 00:19:48.220670 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-01 00:19:48.285420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-01 00:19:48.285512 | orchestrator | 2026-01-01 00:19:48.285530 
| orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-01 00:19:49.727575 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:19:49.727707 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:19:49.727723 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:49.727737 | orchestrator | 2026-01-01 00:19:49.727750 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-01 00:19:50.400505 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:50.400632 | orchestrator | 2026-01-01 00:19:50.400650 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-01 00:19:50.455722 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:50.455830 | orchestrator | 2026-01-01 00:19:50.455872 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-01 00:19:50.551455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-01 00:19:50.551567 | orchestrator | 2026-01-01 00:19:50.551581 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-01 00:19:51.092844 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:51.093817 | orchestrator | 2026-01-01 00:19:51.093852 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-01 00:19:51.518809 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:51.518982 | orchestrator | 2026-01-01 00:19:51.519002 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-01 00:19:52.832357 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-01 00:19:52.832483 | orchestrator | changed: [testbed-manager] => (item=openstack) 
2026-01-01 00:19:52.832499 | orchestrator | 2026-01-01 00:19:52.832513 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-01 00:19:53.500515 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:53.500634 | orchestrator | 2026-01-01 00:19:53.500650 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-01 00:19:53.937610 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:53.937733 | orchestrator | 2026-01-01 00:19:53.937751 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-01 00:19:54.316497 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:54.316632 | orchestrator | 2026-01-01 00:19:54.316661 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-01 00:19:54.363948 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:54.364027 | orchestrator | 2026-01-01 00:19:54.364036 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-01 00:19:54.450231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-01 00:19:54.450361 | orchestrator | 2026-01-01 00:19:54.450386 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-01 00:19:54.497303 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:54.497399 | orchestrator | 2026-01-01 00:19:54.497413 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-01 00:19:56.631031 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-01 00:19:56.631151 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-01 00:19:56.631169 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 
2026-01-01 00:19:56.631182 | orchestrator | 2026-01-01 00:19:56.631198 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-01 00:19:57.376877 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:57.377009 | orchestrator | 2026-01-01 00:19:57.377020 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-01 00:19:58.123438 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:58.123551 | orchestrator | 2026-01-01 00:19:58.123575 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-01 00:19:58.860044 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:58.860173 | orchestrator | 2026-01-01 00:19:58.860192 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-01 00:19:58.944562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-01 00:19:58.944688 | orchestrator | 2026-01-01 00:19:58.944706 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-01 00:19:58.989829 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:58.990097 | orchestrator | 2026-01-01 00:19:58.990120 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-01 00:19:59.754560 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-01 00:19:59.754683 | orchestrator | 2026-01-01 00:19:59.754698 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-01 00:19:59.843464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-01 00:19:59.843585 | orchestrator | 2026-01-01 00:19:59.843601 | orchestrator | 
TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-01 00:20:00.666260 | orchestrator | changed: [testbed-manager] 2026-01-01 00:20:00.666390 | orchestrator | 2026-01-01 00:20:00.666407 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-01 00:20:01.281117 | orchestrator | ok: [testbed-manager] 2026-01-01 00:20:01.281225 | orchestrator | 2026-01-01 00:20:01.281236 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-01 00:20:01.335857 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:20:01.336007 | orchestrator | 2026-01-01 00:20:01.336022 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-01 00:20:01.399842 | orchestrator | ok: [testbed-manager] 2026-01-01 00:20:01.399947 | orchestrator | 2026-01-01 00:20:01.399955 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-01 00:20:02.308243 | orchestrator | changed: [testbed-manager] 2026-01-01 00:20:02.308384 | orchestrator | 2026-01-01 00:20:02.308401 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-01 00:21:12.490221 | orchestrator | changed: [testbed-manager] 2026-01-01 00:21:12.490343 | orchestrator | 2026-01-01 00:21:12.490357 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-01 00:21:13.540203 | orchestrator | ok: [testbed-manager] 2026-01-01 00:21:13.540356 | orchestrator | 2026-01-01 00:21:13.540405 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-01 00:21:13.593675 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:21:13.593792 | orchestrator | 2026-01-01 00:21:13.593808 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2026-01-01 00:21:20.140440 | orchestrator | changed: [testbed-manager]
2026-01-01 00:21:20.140559 | orchestrator |
2026-01-01 00:21:20.140578 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-01-01 00:21:20.195220 | orchestrator | ok: [testbed-manager]
2026-01-01 00:21:20.195312 | orchestrator |
2026-01-01 00:21:20.195327 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-01 00:21:20.195338 | orchestrator |
2026-01-01 00:21:20.195349 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-01-01 00:21:20.246798 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:21:20.246957 | orchestrator |
2026-01-01 00:21:20.246979 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-01-01 00:22:20.294982 | orchestrator | Pausing for 60 seconds
2026-01-01 00:22:20.295130 | orchestrator | changed: [testbed-manager]
2026-01-01 00:22:20.295149 | orchestrator |
2026-01-01 00:22:20.295163 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-01-01 00:22:23.934250 | orchestrator | changed: [testbed-manager]
2026-01-01 00:22:23.934376 | orchestrator |
2026-01-01 00:22:23.934393 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-01-01 00:23:26.078841 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-01-01 00:23:26.079007 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-01-01 00:23:26.079024 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
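The "Wait for an healthy manager service" handler above retries a health probe up to 50 times, counting down on each failure. A minimal shell sketch of that retry loop (hypothetical; the real handler is an Ansible task with `retries`/`until`, and the counter here fakes a service that becomes healthy on the third check):

```shell
# Hypothetical sketch of a bounded health-check retry loop, as in the
# "Wait for an healthy manager service" handler (50 retries in the log).
attempt=0
retries=50
status=starting
while [ "$status" != "healthy" ]; do
    attempt=$((attempt + 1))
    # Stand-in for the real health probe (e.g. inspecting the manager
    # container's health status); turns healthy on the third attempt.
    if [ "$attempt" -ge 3 ]; then status=healthy; else status=starting; fi
    retries=$((retries - 1))
    [ "$retries" -gt 0 ] || { echo "gave up waiting" >&2; exit 1; }
done
echo "healthy after $attempt checks"
```

The log's three "FAILED - RETRYING ... (50/49/48 retries left)" lines followed by `changed` correspond to exactly this shape: a fixed retry budget that only aborts when exhausted.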
2026-01-01 00:23:26.079037 | orchestrator | changed: [testbed-manager] 2026-01-01 00:23:26.079051 | orchestrator | 2026-01-01 00:23:26.079063 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-01 00:23:37.851125 | orchestrator | changed: [testbed-manager] 2026-01-01 00:23:37.851266 | orchestrator | 2026-01-01 00:23:37.851286 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-01 00:23:37.947360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-01 00:23:37.947481 | orchestrator | 2026-01-01 00:23:37.947497 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-01 00:23:37.947510 | orchestrator | 2026-01-01 00:23:37.947522 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-01 00:23:38.014401 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:23:38.014546 | orchestrator | 2026-01-01 00:23:38.014563 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-01 00:23:38.099078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-01 00:23:38.099222 | orchestrator | 2026-01-01 00:23:38.099247 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-01 00:23:38.909426 | orchestrator | changed: [testbed-manager] 2026-01-01 00:23:38.909516 | orchestrator | 2026-01-01 00:23:38.909526 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-01 00:23:42.336787 | orchestrator | ok: [testbed-manager] 2026-01-01 00:23:42.336978 | orchestrator | 2026-01-01 00:23:42.336996 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-01-01 00:23:42.416269 | orchestrator | ok: [testbed-manager] => { 2026-01-01 00:23:42.416385 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-01 00:23:42.416401 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-01 00:23:42.416414 | orchestrator | "Checking running containers against expected versions...", 2026-01-01 00:23:42.416427 | orchestrator | "", 2026-01-01 00:23:42.416440 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-01 00:23:42.416452 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-01 00:23:42.416463 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.416474 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-01 00:23:42.416486 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.416497 | orchestrator | "", 2026-01-01 00:23:42.416508 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-01 00:23:42.416520 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-01 00:23:42.416531 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.416542 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-01-01 00:23:42.416553 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.416564 | orchestrator | "", 2026-01-01 00:23:42.416575 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-01 00:23:42.416586 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-01 00:23:42.416598 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.416609 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-01 00:23:42.416620 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.416631 | orchestrator | "", 2026-01-01 00:23:42.416642 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-01 00:23:42.416686 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-01 00:23:42.416698 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.416710 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-01 00:23:42.416720 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.416731 | orchestrator | "", 2026-01-01 00:23:42.416742 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-01 00:23:42.416753 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-01-01 00:23:42.416764 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.416775 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-01-01 00:23:42.416789 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.416802 | orchestrator | "", 2026-01-01 00:23:42.416814 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-01 00:23:42.416827 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.416864 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.416878 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.416891 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.416903 | orchestrator | "", 2026-01-01 00:23:42.416916 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-01 00:23:42.416929 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-01 00:23:42.416943 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.416965 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-01 00:23:42.416979 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.416992 | orchestrator | "", 2026-01-01 00:23:42.417004 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-01-01 00:23:42.417018 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-01 00:23:42.417037 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.417050 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-01 00:23:42.417064 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.417077 | orchestrator | "", 2026-01-01 00:23:42.417089 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-01 00:23:42.417103 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-01-01 00:23:42.417116 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.417129 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-01-01 00:23:42.417140 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.417151 | orchestrator | "", 2026-01-01 00:23:42.417162 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-01 00:23:42.417173 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-01 00:23:42.417184 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.417195 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-01 00:23:42.417206 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.417217 | orchestrator | "", 2026-01-01 00:23:42.417228 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-01 00:23:42.417239 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.417250 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.417261 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.417272 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.417282 | orchestrator | "", 2026-01-01 00:23:42.417293 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-01 00:23:42.417305 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.417316 | 
orchestrator | " Enabled: true", 2026-01-01 00:23:42.417327 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.417338 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.417348 | orchestrator | "", 2026-01-01 00:23:42.417359 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-01 00:23:42.417370 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.417390 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.417401 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.417412 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.417422 | orchestrator | "", 2026-01-01 00:23:42.417434 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-01 00:23:42.417444 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.417455 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.417466 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.417477 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.417488 | orchestrator | "", 2026-01-01 00:23:42.417499 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-01 00:23:42.417531 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.417543 | orchestrator | " Enabled: true", 2026-01-01 00:23:42.417554 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:23:42.417565 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:23:42.417576 | orchestrator | "", 2026-01-01 00:23:42.417587 | orchestrator | "=== Summary ===", 2026-01-01 00:23:42.417598 | orchestrator | "Errors (version mismatches): 0", 2026-01-01 00:23:42.417609 | orchestrator | "Warnings (expected containers not running): 0", 2026-01-01 00:23:42.417620 | orchestrator | "", 2026-01-01 00:23:42.417631 | orchestrator | "✅ All running containers match expected 
versions!" 2026-01-01 00:23:42.417642 | orchestrator | ] 2026-01-01 00:23:42.417654 | orchestrator | } 2026-01-01 00:23:42.417666 | orchestrator | 2026-01-01 00:23:42.417677 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-01 00:23:42.477500 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:23:42.477605 | orchestrator | 2026-01-01 00:23:42.477621 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:23:42.477635 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-01 00:23:42.477647 | orchestrator | 2026-01-01 00:23:42.586240 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-01 00:23:42.586358 | orchestrator | + deactivate 2026-01-01 00:23:42.586376 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-01 00:23:42.586390 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-01 00:23:42.586401 | orchestrator | + export PATH 2026-01-01 00:23:42.586413 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-01 00:23:42.586426 | orchestrator | + '[' -n '' ']' 2026-01-01 00:23:42.586437 | orchestrator | + hash -r 2026-01-01 00:23:42.586448 | orchestrator | + '[' -n '' ']' 2026-01-01 00:23:42.586459 | orchestrator | + unset VIRTUAL_ENV 2026-01-01 00:23:42.586470 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-01 00:23:42.586481 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-01 00:23:42.586493 | orchestrator | + unset -f deactivate 2026-01-01 00:23:42.586505 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-01 00:23:42.593904 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-01 00:23:42.593960 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-01 00:23:42.593974 | orchestrator | + local max_attempts=60 2026-01-01 00:23:42.593987 | orchestrator | + local name=ceph-ansible 2026-01-01 00:23:42.593999 | orchestrator | + local attempt_num=1 2026-01-01 00:23:42.595203 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:23:42.624078 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:23:42.624137 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-01 00:23:42.624151 | orchestrator | + local max_attempts=60 2026-01-01 00:23:42.624163 | orchestrator | + local name=kolla-ansible 2026-01-01 00:23:42.624175 | orchestrator | + local attempt_num=1 2026-01-01 00:23:42.625247 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-01 00:23:42.662916 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:23:42.663009 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-01 00:23:42.663022 | orchestrator | + local max_attempts=60 2026-01-01 00:23:42.663033 | orchestrator | + local name=osism-ansible 2026-01-01 00:23:42.663043 | orchestrator | + local attempt_num=1 2026-01-01 00:23:42.663406 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-01 00:23:42.699196 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:23:42.699283 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-01 00:23:42.699299 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-01 00:23:43.433706 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-01 00:23:43.625443 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-01 00:23:43.625605 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-01 00:23:43.625628 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-01 00:23:43.625645 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-01 00:23:43.625664 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-01 00:23:43.625682 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-01 00:23:43.625726 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-01 00:23:43.625738 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-01 00:23:43.625747 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-01 00:23:43.625756 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-01 00:23:43.625765 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-01-01 00:23:43.625774 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-01 00:23:43.625783 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-01 00:23:43.625791 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-01 00:23:43.625801 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-01 00:23:43.625911 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-01 00:23:43.632975 | orchestrator | ++ semver latest 7.0.0 2026-01-01 00:23:43.682422 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-01 00:23:43.682537 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-01 00:23:43.682555 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-01 00:23:43.685947 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-01 00:23:56.147132 | orchestrator | 2026-01-01 00:23:56 | INFO  | Task 46b4a0a7-aa9a-4a1e-97e8-82292c902fb8 (resolvconf) was prepared for execution. 2026-01-01 00:23:56.147255 | orchestrator | 2026-01-01 00:23:56 | INFO  | It takes a moment until task 46b4a0a7-aa9a-4a1e-97e8-82292c902fb8 (resolvconf) has been started and output is visible here. 
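The `set -x` trace above expands a `wait_for_container_healthy` helper that polls `docker inspect` for each manager container. A minimal sketch of that loop, reconstructed from the trace: the function name, the `max_attempts`/`name`/`attempt_num` locals, and the `{{.State.Health.Status}}` format string appear in the log; the retry/sleep behavior is an assumption, since the trace only shows the already-healthy fast path.

```shell
# Hedged reconstruction of the health-wait loop seen in the trace above.
# The sleep interval and failure message are assumptions, not from the log.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 1
    done
}
```

In the log the guard succeeds immediately for `ceph-ansible`, `kolla-ansible`, and `osism-ansible` because `docker inspect` already returns `healthy` on the first poll.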
2026-01-01 00:24:11.704578 | orchestrator | 2026-01-01 00:24:11.704702 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-01 00:24:11.704721 | orchestrator | 2026-01-01 00:24:11.704734 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:24:11.704745 | orchestrator | Thursday 01 January 2026 00:24:00 +0000 (0:00:00.147) 0:00:00.147 ****** 2026-01-01 00:24:11.704757 | orchestrator | ok: [testbed-manager] 2026-01-01 00:24:11.704772 | orchestrator | 2026-01-01 00:24:11.704783 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-01 00:24:11.704795 | orchestrator | Thursday 01 January 2026 00:24:05 +0000 (0:00:04.963) 0:00:05.111 ****** 2026-01-01 00:24:11.704806 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:24:11.704819 | orchestrator | 2026-01-01 00:24:11.704894 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-01 00:24:11.704908 | orchestrator | Thursday 01 January 2026 00:24:05 +0000 (0:00:00.066) 0:00:05.177 ****** 2026-01-01 00:24:11.704920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-01 00:24:11.704933 | orchestrator | 2026-01-01 00:24:11.704955 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-01 00:24:11.704967 | orchestrator | Thursday 01 January 2026 00:24:05 +0000 (0:00:00.080) 0:00:05.258 ****** 2026-01-01 00:24:11.704979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-01 00:24:11.704990 | orchestrator | 2026-01-01 00:24:11.705002 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-01-01 00:24:11.705014 | orchestrator | Thursday 01 January 2026 00:24:05 +0000 (0:00:00.085) 0:00:05.344 ****** 2026-01-01 00:24:11.705025 | orchestrator | ok: [testbed-manager] 2026-01-01 00:24:11.705036 | orchestrator | 2026-01-01 00:24:11.705047 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-01 00:24:11.705058 | orchestrator | Thursday 01 January 2026 00:24:06 +0000 (0:00:01.224) 0:00:06.568 ****** 2026-01-01 00:24:11.705069 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:24:11.705080 | orchestrator | 2026-01-01 00:24:11.705094 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-01 00:24:11.705107 | orchestrator | Thursday 01 January 2026 00:24:06 +0000 (0:00:00.076) 0:00:06.645 ****** 2026-01-01 00:24:11.705120 | orchestrator | ok: [testbed-manager] 2026-01-01 00:24:11.705133 | orchestrator | 2026-01-01 00:24:11.705145 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-01 00:24:11.705158 | orchestrator | Thursday 01 January 2026 00:24:07 +0000 (0:00:00.499) 0:00:07.144 ****** 2026-01-01 00:24:11.705172 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:24:11.705185 | orchestrator | 2026-01-01 00:24:11.705197 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-01 00:24:11.705212 | orchestrator | Thursday 01 January 2026 00:24:07 +0000 (0:00:00.077) 0:00:07.221 ****** 2026-01-01 00:24:11.705225 | orchestrator | changed: [testbed-manager] 2026-01-01 00:24:11.705238 | orchestrator | 2026-01-01 00:24:11.705252 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-01 00:24:11.705264 | orchestrator | Thursday 01 January 2026 00:24:08 +0000 (0:00:00.575) 0:00:07.797 ****** 2026-01-01 00:24:11.705277 | orchestrator | changed: 
[testbed-manager] 2026-01-01 00:24:11.705312 | orchestrator | 2026-01-01 00:24:11.705325 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-01 00:24:11.705339 | orchestrator | Thursday 01 January 2026 00:24:09 +0000 (0:00:01.169) 0:00:08.966 ****** 2026-01-01 00:24:11.705352 | orchestrator | ok: [testbed-manager] 2026-01-01 00:24:11.705365 | orchestrator | 2026-01-01 00:24:11.705378 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-01 00:24:11.705391 | orchestrator | Thursday 01 January 2026 00:24:10 +0000 (0:00:01.005) 0:00:09.971 ****** 2026-01-01 00:24:11.705405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-01 00:24:11.705417 | orchestrator | 2026-01-01 00:24:11.705429 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-01 00:24:11.705440 | orchestrator | Thursday 01 January 2026 00:24:10 +0000 (0:00:00.101) 0:00:10.073 ****** 2026-01-01 00:24:11.705451 | orchestrator | changed: [testbed-manager] 2026-01-01 00:24:11.705461 | orchestrator | 2026-01-01 00:24:11.705472 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:24:11.705484 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-01 00:24:11.705495 | orchestrator | 2026-01-01 00:24:11.705506 | orchestrator | 2026-01-01 00:24:11.705517 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:24:11.705528 | orchestrator | Thursday 01 January 2026 00:24:11 +0000 (0:00:01.179) 0:00:11.253 ****** 2026-01-01 00:24:11.705538 | orchestrator | =============================================================================== 2026-01-01 00:24:11.705549 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.96s 2026-01-01 00:24:11.705560 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.22s 2026-01-01 00:24:11.705570 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s 2026-01-01 00:24:11.705581 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.17s 2026-01-01 00:24:11.705592 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.01s 2026-01-01 00:24:11.705602 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s 2026-01-01 00:24:11.705632 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2026-01-01 00:24:11.705644 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2026-01-01 00:24:11.705655 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-01-01 00:24:11.705666 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-01-01 00:24:11.705682 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-01-01 00:24:11.705693 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.08s 2026-01-01 00:24:11.705704 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-01-01 00:24:12.038366 | orchestrator | + osism apply sshconfig 2026-01-01 00:24:24.253773 | orchestrator | 2026-01-01 00:24:24 | INFO  | Task 08bb50f0-6ba5-4841-a459-00f4d262979d (sshconfig) was prepared for execution. 
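Just before the `resolvconf` apply, the trace runs `semver latest 7.0.0`, gets `-1`, and falls through to a `[[ latest == latest ]]` check before rewriting the callback plugin in `ansible.cfg`. A sketch of that guard, under assumptions: the `semver_cmp` helper below is a minimal stand-in comparator (the real `semver` CLI used by the job is not shown), and only the "version >= 7.0.0 or tag is `latest`" decision is taken from the log.

```shell
# Stand-in three-way comparator (assumption): prints -1/0/1 for $1 vs $2;
# non-numeric tags such as "latest" sort lowest, matching the -1 in the trace.
semver_cmp() {
    if [[ ! $1 =~ ^[0-9]+(\.[0-9]+)*$ ]]; then echo -1; return; fi
    if [[ $1 == "$2" ]]; then echo 0; return; fi
    if [[ $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]; then
        echo -1
    else
        echo 1
    fi
}

manager_version="latest"
# Swap the callback plugin only on sufficiently new (or "latest") managers,
# as in the trace's sed over /opt/configuration/environments/ansible.cfg.
if [[ $(semver_cmp "$manager_version" 7.0.0) -ge 0 ]] \
   || [[ $manager_version == latest ]]; then
    echo "would run: sed -i s/community.general.yaml/osism.commons.still_alive/ ansible.cfg"
fi
```

In the log this is why the numeric comparison (`-1 -ge 0`) fails yet the `sed` still runs: the `latest` tag is treated as newest by the string-equality fallback.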
2026-01-01 00:24:24.253984 | orchestrator | 2026-01-01 00:24:24 | INFO  | It takes a moment until task 08bb50f0-6ba5-4841-a459-00f4d262979d (sshconfig) has been started and output is visible here. 2026-01-01 00:24:36.575518 | orchestrator | 2026-01-01 00:24:36.575662 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-01 00:24:36.575680 | orchestrator | 2026-01-01 00:24:36.575692 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-01 00:24:36.575705 | orchestrator | Thursday 01 January 2026 00:24:28 +0000 (0:00:00.163) 0:00:00.163 ****** 2026-01-01 00:24:36.575752 | orchestrator | ok: [testbed-manager] 2026-01-01 00:24:36.575766 | orchestrator | 2026-01-01 00:24:36.575778 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-01 00:24:36.575789 | orchestrator | Thursday 01 January 2026 00:24:29 +0000 (0:00:00.575) 0:00:00.738 ****** 2026-01-01 00:24:36.575800 | orchestrator | changed: [testbed-manager] 2026-01-01 00:24:36.575812 | orchestrator | 2026-01-01 00:24:36.575866 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-01 00:24:36.575878 | orchestrator | Thursday 01 January 2026 00:24:29 +0000 (0:00:00.526) 0:00:01.265 ****** 2026-01-01 00:24:36.575889 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-01 00:24:36.575901 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-01 00:24:36.575912 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-01 00:24:36.575923 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-01 00:24:36.575934 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-01-01 00:24:36.575945 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-01 00:24:36.575955 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-01-01 00:24:36.575966 | orchestrator | 2026-01-01 00:24:36.575977 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-01 00:24:36.575989 | orchestrator | Thursday 01 January 2026 00:24:35 +0000 (0:00:05.933) 0:00:07.198 ****** 2026-01-01 00:24:36.576000 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:24:36.576011 | orchestrator | 2026-01-01 00:24:36.576023 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-01 00:24:36.576036 | orchestrator | Thursday 01 January 2026 00:24:35 +0000 (0:00:00.081) 0:00:07.280 ****** 2026-01-01 00:24:36.576049 | orchestrator | changed: [testbed-manager] 2026-01-01 00:24:36.576061 | orchestrator | 2026-01-01 00:24:36.576074 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:24:36.576088 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:24:36.576102 | orchestrator | 2026-01-01 00:24:36.576115 | orchestrator | 2026-01-01 00:24:36.576127 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:24:36.576140 | orchestrator | Thursday 01 January 2026 00:24:36 +0000 (0:00:00.625) 0:00:07.905 ****** 2026-01-01 00:24:36.576153 | orchestrator | =============================================================================== 2026-01-01 00:24:36.576165 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.93s 2026-01-01 00:24:36.576179 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2026-01-01 00:24:36.576191 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2026-01-01 00:24:36.576204 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.53s 2026-01-01 00:24:36.576217 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-01-01 00:24:36.899776 | orchestrator | + osism apply known-hosts 2026-01-01 00:24:49.091007 | orchestrator | 2026-01-01 00:24:49 | INFO  | Task bf10210b-c07c-4cc0-bd77-1e53ce4de653 (known-hosts) was prepared for execution. 2026-01-01 00:24:49.091126 | orchestrator | 2026-01-01 00:24:49 | INFO  | It takes a moment until task bf10210b-c07c-4cc0-bd77-1e53ce4de653 (known-hosts) has been started and output is visible here. 2026-01-01 00:25:06.703642 | orchestrator | 2026-01-01 00:25:06.703742 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-01 00:25:06.703761 | orchestrator | 2026-01-01 00:25:06.703774 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-01 00:25:06.703786 | orchestrator | Thursday 01 January 2026 00:24:53 +0000 (0:00:00.183) 0:00:00.183 ****** 2026-01-01 00:25:06.703826 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-01 00:25:06.703858 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-01 00:25:06.703870 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-01 00:25:06.703881 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-01 00:25:06.703893 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-01 00:25:06.703904 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-01 00:25:06.703914 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-01 00:25:06.703925 | orchestrator | 2026-01-01 00:25:06.703946 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-01-01 00:25:06.703958 | orchestrator | Thursday 01 January 2026 00:24:59 +0000 (0:00:06.116) 0:00:06.300 ****** 2026-01-01 
00:25:06.703970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-01 00:25:06.703983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-01 00:25:06.703994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-01 00:25:06.704005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-01 00:25:06.704016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-01 00:25:06.704027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-01 00:25:06.704038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-01 00:25:06.704049 | orchestrator | 2026-01-01 00:25:06.704060 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:06.704071 | orchestrator | Thursday 01 January 2026 00:24:59 +0000 (0:00:00.162) 0:00:06.462 ****** 2026-01-01 00:25:06.704082 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFZzUQPPwAw4NQ95thKlwbZsk60tjaYV5IlYjXcY/im8Chnr8wn4e/M2nR5jZmhEMKHHhBPREOIyZ5hiwqRFFOY=) 2026-01-01 00:25:06.704098 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRo93yQMu2AplZJ292YxjtkU1amxS1alQvu9Um66LItzzi4EU9WvwKaH9Zhwy2jBLf01L3xiHS9ZnPNL1eLYmtI6yGCSedjUwhUCObTBdpLaheVY9M9gvgOAzscUe6FTxiV+AkQTzl3okIcZpnhja/bwCvBP99laZDeVguM/pCS0xjVhZO7iotLo21EIzwTvWyVSettwNJGQJjvi+kV53os69q2PMsZvVMMmNbkegItL/UXi4sFpQC7Q18hrzu3noR7weYIIMtQzSMlaYEgL3FGb3zM09PNNzsOZ68dYLbMR7HBoN9Agv7Kl3l84zGPbe2rQJJS8hRkrh04yA/5iVcyAp9xluAc3v2E7sPjDEmz6wHIBOwAO9vpl9g+eCZEaGXlSBshrmnlc/YCZ2SMJfmJvmhUTXfOStsfoiG7cWGs/STlAWjJDFkzD3gSTUordVzl3tBIMij/Rw3P/Y+sxHZahwhB65pwhUN+9mIG7zZrwJ0+NRnv8fr7XUXPy6spp0=) 2026-01-01 00:25:06.704113 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJyXBCgzcloXMOXYaPGj+MW8wFUTfBhYhfDwAl2Phe/) 2026-01-01 00:25:06.704125 | orchestrator | 2026-01-01 00:25:06.704136 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:06.704147 | orchestrator | Thursday 01 January 2026 00:25:00 +0000 (0:00:01.204) 0:00:07.667 ****** 2026-01-01 00:25:06.704158 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMVgMxI3Kk6G5ev8wsumHKmMQvw4pb2xYZuQsQdfc6pk1PHx9GA3diAlPKp8D8mzNTw0fCOXnAlnQ8iLyXWDE7U=) 2026-01-01 00:25:06.704176 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKG2Fxyv4JfrbJwEP15rEFdWUxfjll2QTUDhrPw0WWt/) 2026-01-01 00:25:06.704214 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCQmlzcCS8Kl5qMvCid2lJGq3AzJ9Ran0XMi3YEOKmDnqD1e1cxeqnNy3w7HTcw+OhYdMJr1Dlp+xCQD/l4qteqKSrOn2TP6qIHE/wxXlozuO1E0vs6vsGNoZLMfI3IMzcDl1Q1SoBFD6iavSin6fvozBTR1224s6hLW10Hjcwcz7G86x6LKKzgxLCX/FThq3TjEULnMAqcV8PHYBj0DaVbw6yfrntl2LCW55rJSti9HoCWZPLNZvdfGJw3cdzxCKTKNhJtrqz2nlLXgyJHg0mHf+GcApRyt3fqsP8YlmFbwoMVT72N14VKjiOcw90lKgKolEAZMhwyqRsG/6a95K5LSF8JyQA+KTzgqqs/Haeu1HszLHQf7swFFspWe5AHWn4tlEssfT0NTmdHLxfIi6zBx+MpJfj5OaggNNIAw7xjguMk5xbG3EkUUdMnOYoawxwW3FwolipRq3QxeP0vwxFw+HJx3EdkW6AzNtrZ+rQ0h+fI4d/uskqp74b9a3VcN2E=) 2026-01-01 00:25:06.704230 | orchestrator | 2026-01-01 00:25:06.704243 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:06.704257 | orchestrator | Thursday 01 January 2026 00:25:02 +0000 (0:00:01.095) 0:00:08.762 ****** 2026-01-01 00:25:06.704269 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJgtkMHh39cqKcMkiVS6aTiMHIxRZgP3aq/zMymiOogmL+CQjm7Cz0EIPLWSQmEgc8WMC7rpYTKj+SfQFK3oZMA=) 2026-01-01 00:25:06.704341 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkvcTC9Kn/bfd9o3UdyRTnAhmVDwbZh3P36t8fRIQPcjSh3E9iFCW0xv64RAhQd4fM1r+/aZ2WAa/aGSMEH/Yr4Qs3Bbq8IddgRy4qtxmJ8NP4samw+mm5RQcZj099bZOsNbSSPXggdCz9TbDYTjfOOH12g3xEQMcE/xMPCLQ0ZYy8J1qEJP8bUStKBKgxIN0KcjuOM1ZOWVnJXEnZwRnw3UuYj/BZGwM7ZRhxJLDyyUWeA1yuzKCAhB4rTh/N1aRG0zMZYoL9lG3Bw2rZFnKjg81XRM7756FaXXxpWRavU+XopRgF0xuS2v6ZDoxzl36DLlHNQLwpOeOx2obmubraOcANQH26XWa4nlmTxUdxZXbyi8T51jXiK0sUeuTe5Hn72B0Vb+I3PCP+UjZxSnQmAqKyxgF1tMeQ/9gdEeAmfKdM/BPud03jCnNzwUwaSJhLCd/NTu4ChTM9U9TCdYqVtcHr39X9jpm/O6OdifUtnloLAPVyhOi92N00wnUevmk=) 2026-01-01 00:25:06.704356 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAYP268K2IRU2xCGAB0oEBW999J6vOHYr3gPZljH0jyh) 2026-01-01 00:25:06.704369 | orchestrator | 2026-01-01 00:25:06.704383 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:06.704396 | orchestrator | Thursday 01 January 2026 00:25:03 +0000 (0:00:01.213) 0:00:09.976 ****** 2026-01-01 00:25:06.704414 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCPqKozN/sOuXJFjAJmiQm2bzvlo9CJEmlO0BKm2afjhLtJJwaakba/yuX7gko7kPn7YqokKs0XuUSl/xoYCzM4aTgq08BFpqWRy/gqNb/eWd8hMzQ3aJt5pxQJ2U/LwH6Kh8QEevQc9Ak44PWSB2fr5RV6pgQiyjTr9cDhuzX7zxtiPnSuhRJM1mYKJf/NuWjYONqBzQvxCazUtI557Y2Ln+Uh203rDgGQFoghDnS3wkPKXMZioJ2r6NZl/2MKy5eBlRmzEX11dAhA3pRuUBqY6+6Vtn7A1ex7gI8H3ki2x3ch4Uwv8ldNfkOKQVgqlHHH9nLy6GBwjUgLX6UyuZgXdmSsVrkrOFQ94kn+gg6uUXKMJRqgrrRQjSn8V4CtEJWD57f9aPAVtLKRkwwYTtI7ZTIzzdLsxRp+iLT3DziXsJIP+YOnJgMpTr73Pc3gVirqBm4szmjzu4zSjjPH7hiSGTVsGKbDb7NqNWIriuYbuPxq1X9HiI1qDX0DDyJ8oxE=) 2026-01-01 00:25:06.704428 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGmEy6hEM98i6Hl33omcPDbdcGoo3xrnAHixPytM8y/D) 2026-01-01 00:25:06.704442 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGBYCK/o5gFQ/lIq6CwOyqcFM3cC3mEF6pxyBeRa+xllZcH644RHpQVe+Gt+Q5znXyn3To6evgWs0crJvT7yQQE=) 2026-01-01 00:25:06.704455 | orchestrator | 2026-01-01 00:25:06.704468 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:06.704482 | orchestrator | Thursday 01 January 2026 00:25:04 +0000 (0:00:01.125) 0:00:11.101 ****** 2026-01-01 00:25:06.704495 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCv7ZDotfzyOTJqJKs5bAXkF8gMSmJAxehyuM/hf7fvAh4Al/SZz+i9TZykTLEGzj0GAcNpodlvk6Gl24Kroyi8=) 2026-01-01 00:25:06.704509 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDHZ1Tv7x81t6f/VjDrouWL3vAfWELqLuABQOAECPG3wD2fkJpN0aSgwJXL0lXqbASvu7UnIe+llK5j8qyC8C+cd0+xOyonTq58WyPI/L50qlfgj/y9aL6NVKxeAnDuYq1u+v9nrIjzHTbSzqwDLokqo2vvOuJOp/787y22AL61ICktL4FwHFRuN28GKFydrRLtZeB5MEKmYLIKyfZR29ss59zOLUq6A/DQm+mTgXuxpnm2NI+p6gCXc2zMFsHaj2ACYQfNftTxGZkMGabyJV7DNaQGfVMWU4XQcsW2EsvS5HtKzfJSXM5x34kmQ1yoG+gobsw8LsTcPTsJBMZfntzhHdcp0FUlyiWH90ehEL+XnMkQ1hTzX0Dc4wNlkk7WjMkIoKXLiPvE8p8qMbbqLKIhtz1yLRd+9ZIdODzEL8Ka5rL3qJALn6BIAO5Q5Y0RrWA2SOSAezL9zlx13WTEU+J1jxU2ooZB8RQTTZHgC2G9AY87YGJ/TT1M8KHi0UMfWgs=) 2026-01-01 00:25:06.704528 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEEF9maqlvOkojQzQAdzX3pTZArkTwIf2Nx4Z2icdRrC) 2026-01-01 00:25:06.704540 | orchestrator | 2026-01-01 00:25:06.704552 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:06.704562 | orchestrator | Thursday 01 January 2026 00:25:05 +0000 (0:00:01.173) 0:00:12.274 ****** 2026-01-01 00:25:06.704581 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIl3vc2al3nv57e6S+IZgBWd39rt4tOYXRS6zW9wr94x) 2026-01-01 00:25:18.193176 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr+5/YE0fO3LgDhCxomsIemIO9kuOAUHc/ClqkZsdXgTeydvR78BV77QZHdsBYeM+pd8AuToDTxFX6TjHoe0Ojrq0+xl7VWO7GRQSRJxdoReeqcTTa1g0nV0M8+ziTnMoabU3LhfLssQMb5A2ayjpnBkrnOAjZsDyK+1GMKqUvpU5Iac32Ryeg3GFZEX9xWX7mbP34Itf+7HyaHg7emRUMSEAZJfjAMVJS0Jf2sG4Q+/k0NCm/RtFk9Vyx73q1lUXbZb30MROY/mtvT44crZb9XIxc78CIwODh9eeadjjSsWs7e/NR5OH7UCiuAUKmM70ZomSENpiin9YR672Oqlit4WdQG5BJ3ZimIJJ/Zl6yEIPYunArFxMLGvj71UrF7CDjyEUMtQnHpBoXEWgXNdycpNsvOktImvhv+OiMyrcrDwnF5vhp3EBkKTiEtpgukXdDZ2zMFOu9VR5MHBYEKD74fuSFosfNa4K2n2Yxvd7Y48yuReOD/kzWOfQWWzX0Rbk=) 2026-01-01 00:25:18.193294 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGd+xyBMzae8BP8baNeiboGEKxYvUDqJynl4EO8cEfh0hYUofO69OLZQDqWeoI9rYy5ReecJTwwpFPtjUB09kt0=) 2026-01-01 00:25:18.193314 | orchestrator | 2026-01-01 00:25:18.193327 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:18.193340 | orchestrator | Thursday 01 January 2026 00:25:06 +0000 (0:00:01.118) 0:00:13.393 ****** 2026-01-01 00:25:18.193351 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII5ZxOXRgXpe4aOsD5CD7eyazJObVwWYTRp1E0PqEh1S) 2026-01-01 00:25:18.193366 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0Rj/9Rp7HJKnsvfu+XkcI3LDGAx730KR43xLKsd6InkQr6sb4+jfVRdmqjHnof+Jwckt6tHNo5i3Oog5z3fe/OLvkUWsbxJz2MpK9hRYXTVawQFHMhYZB2UGkC9mw+L+wkcFvDwFW1pyeXMobdLH+F+P98fSuOcL598qAVfe7v2BnmTFHOW46cS+gLPmsB9vZjvLcM2JVK5mj3MjRgGirrZnJceO0q40GIQeumDsecrJXYyYlW72NHzy6aGcnDTGPzxmMhMIehR8zIL3Vh3yv0MyRABgotfF4KFx01KFMqXLVfVuQNm+2SAdTKDbGkcJmAVWCI30drK8xfUi8sndWz1+BFTzCm5dIjO7TgwWMn4Lbd/7vKtw86Jw/8V9Dv8jOD5wUKKSCEPlwy5Vb/O8Jt3SDENteoQqf3guBMCq7BLwKo1wjfMyEq44PjAFkqdLUTPU3WO7vntDac9Ri3N/NeTUpq9bEVjQd78NossXCK6t2u8tyQ0UYaKIaHmInsb0=) 2026-01-01 00:25:18.193378 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBjZ+iGd2xSNm23asic3C8M3v2ONt2kkf6YY4abOyWH9ocgw5DxdcJPJWe/ygGJOPYZxRukfVehaGSs8yV6TwxI=) 2026-01-01 00:25:18.193389 | orchestrator | 2026-01-01 00:25:18.193401 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-01 00:25:18.193413 | orchestrator | Thursday 01 January 2026 00:25:07 +0000 (0:00:01.115) 0:00:14.508 ****** 2026-01-01 00:25:18.193425 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-01 00:25:18.193437 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-01 
00:25:18.193448 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-01 00:25:18.193477 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-01 00:25:18.193489 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-01 00:25:18.193523 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-01 00:25:18.193535 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-01 00:25:18.193547 | orchestrator | 2026-01-01 00:25:18.193559 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-01 00:25:18.193571 | orchestrator | Thursday 01 January 2026 00:25:13 +0000 (0:00:05.590) 0:00:20.099 ****** 2026-01-01 00:25:18.193584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-01 00:25:18.193597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-01 00:25:18.193608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-01 00:25:18.193619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-01 00:25:18.193630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-01 00:25:18.193641 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-01 00:25:18.193652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-01 00:25:18.193663 | orchestrator | 2026-01-01 00:25:18.193689 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:18.193701 | orchestrator | Thursday 01 January 2026 00:25:13 +0000 (0:00:00.206) 0:00:20.306 ****** 2026-01-01 00:25:18.193715 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJyXBCgzcloXMOXYaPGj+MW8wFUTfBhYhfDwAl2Phe/) 2026-01-01 00:25:18.193731 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRo93yQMu2AplZJ292YxjtkU1amxS1alQvu9Um66LItzzi4EU9WvwKaH9Zhwy2jBLf01L3xiHS9ZnPNL1eLYmtI6yGCSedjUwhUCObTBdpLaheVY9M9gvgOAzscUe6FTxiV+AkQTzl3okIcZpnhja/bwCvBP99laZDeVguM/pCS0xjVhZO7iotLo21EIzwTvWyVSettwNJGQJjvi+kV53os69q2PMsZvVMMmNbkegItL/UXi4sFpQC7Q18hrzu3noR7weYIIMtQzSMlaYEgL3FGb3zM09PNNzsOZ68dYLbMR7HBoN9Agv7Kl3l84zGPbe2rQJJS8hRkrh04yA/5iVcyAp9xluAc3v2E7sPjDEmz6wHIBOwAO9vpl9g+eCZEaGXlSBshrmnlc/YCZ2SMJfmJvmhUTXfOStsfoiG7cWGs/STlAWjJDFkzD3gSTUordVzl3tBIMij/Rw3P/Y+sxHZahwhB65pwhUN+9mIG7zZrwJ0+NRnv8fr7XUXPy6spp0=) 2026-01-01 00:25:18.193745 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFZzUQPPwAw4NQ95thKlwbZsk60tjaYV5IlYjXcY/im8Chnr8wn4e/M2nR5jZmhEMKHHhBPREOIyZ5hiwqRFFOY=) 2026-01-01 00:25:18.193758 | orchestrator | 2026-01-01 00:25:18.193771 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:18.193783 | orchestrator | Thursday 01 January 2026 
00:25:14 +0000 (0:00:01.184) 0:00:21.490 ****** 2026-01-01 00:25:18.193825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQmlzcCS8Kl5qMvCid2lJGq3AzJ9Ran0XMi3YEOKmDnqD1e1cxeqnNy3w7HTcw+OhYdMJr1Dlp+xCQD/l4qteqKSrOn2TP6qIHE/wxXlozuO1E0vs6vsGNoZLMfI3IMzcDl1Q1SoBFD6iavSin6fvozBTR1224s6hLW10Hjcwcz7G86x6LKKzgxLCX/FThq3TjEULnMAqcV8PHYBj0DaVbw6yfrntl2LCW55rJSti9HoCWZPLNZvdfGJw3cdzxCKTKNhJtrqz2nlLXgyJHg0mHf+GcApRyt3fqsP8YlmFbwoMVT72N14VKjiOcw90lKgKolEAZMhwyqRsG/6a95K5LSF8JyQA+KTzgqqs/Haeu1HszLHQf7swFFspWe5AHWn4tlEssfT0NTmdHLxfIi6zBx+MpJfj5OaggNNIAw7xjguMk5xbG3EkUUdMnOYoawxwW3FwolipRq3QxeP0vwxFw+HJx3EdkW6AzNtrZ+rQ0h+fI4d/uskqp74b9a3VcN2E=) 2026-01-01 00:25:18.193848 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMVgMxI3Kk6G5ev8wsumHKmMQvw4pb2xYZuQsQdfc6pk1PHx9GA3diAlPKp8D8mzNTw0fCOXnAlnQ8iLyXWDE7U=) 2026-01-01 00:25:18.193863 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKG2Fxyv4JfrbJwEP15rEFdWUxfjll2QTUDhrPw0WWt/) 2026-01-01 00:25:18.193876 | orchestrator | 2026-01-01 00:25:18.193887 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:18.193898 | orchestrator | Thursday 01 January 2026 00:25:15 +0000 (0:00:01.168) 0:00:22.659 ****** 2026-01-01 00:25:18.193910 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDkvcTC9Kn/bfd9o3UdyRTnAhmVDwbZh3P36t8fRIQPcjSh3E9iFCW0xv64RAhQd4fM1r+/aZ2WAa/aGSMEH/Yr4Qs3Bbq8IddgRy4qtxmJ8NP4samw+mm5RQcZj099bZOsNbSSPXggdCz9TbDYTjfOOH12g3xEQMcE/xMPCLQ0ZYy8J1qEJP8bUStKBKgxIN0KcjuOM1ZOWVnJXEnZwRnw3UuYj/BZGwM7ZRhxJLDyyUWeA1yuzKCAhB4rTh/N1aRG0zMZYoL9lG3Bw2rZFnKjg81XRM7756FaXXxpWRavU+XopRgF0xuS2v6ZDoxzl36DLlHNQLwpOeOx2obmubraOcANQH26XWa4nlmTxUdxZXbyi8T51jXiK0sUeuTe5Hn72B0Vb+I3PCP+UjZxSnQmAqKyxgF1tMeQ/9gdEeAmfKdM/BPud03jCnNzwUwaSJhLCd/NTu4ChTM9U9TCdYqVtcHr39X9jpm/O6OdifUtnloLAPVyhOi92N00wnUevmk=) 2026-01-01 00:25:18.193921 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAYP268K2IRU2xCGAB0oEBW999J6vOHYr3gPZljH0jyh) 2026-01-01 00:25:18.193932 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJgtkMHh39cqKcMkiVS6aTiMHIxRZgP3aq/zMymiOogmL+CQjm7Cz0EIPLWSQmEgc8WMC7rpYTKj+SfQFK3oZMA=) 2026-01-01 00:25:18.193943 | orchestrator | 2026-01-01 00:25:18.193954 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:18.193965 | orchestrator | Thursday 01 January 2026 00:25:17 +0000 (0:00:01.115) 0:00:23.774 ****** 2026-01-01 00:25:18.193975 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGmEy6hEM98i6Hl33omcPDbdcGoo3xrnAHixPytM8y/D) 2026-01-01 00:25:18.194012 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCPqKozN/sOuXJFjAJmiQm2bzvlo9CJEmlO0BKm2afjhLtJJwaakba/yuX7gko7kPn7YqokKs0XuUSl/xoYCzM4aTgq08BFpqWRy/gqNb/eWd8hMzQ3aJt5pxQJ2U/LwH6Kh8QEevQc9Ak44PWSB2fr5RV6pgQiyjTr9cDhuzX7zxtiPnSuhRJM1mYKJf/NuWjYONqBzQvxCazUtI557Y2Ln+Uh203rDgGQFoghDnS3wkPKXMZioJ2r6NZl/2MKy5eBlRmzEX11dAhA3pRuUBqY6+6Vtn7A1ex7gI8H3ki2x3ch4Uwv8ldNfkOKQVgqlHHH9nLy6GBwjUgLX6UyuZgXdmSsVrkrOFQ94kn+gg6uUXKMJRqgrrRQjSn8V4CtEJWD57f9aPAVtLKRkwwYTtI7ZTIzzdLsxRp+iLT3DziXsJIP+YOnJgMpTr73Pc3gVirqBm4szmjzu4zSjjPH7hiSGTVsGKbDb7NqNWIriuYbuPxq1X9HiI1qDX0DDyJ8oxE=) 2026-01-01 00:25:23.018785 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGBYCK/o5gFQ/lIq6CwOyqcFM3cC3mEF6pxyBeRa+xllZcH644RHpQVe+Gt+Q5znXyn3To6evgWs0crJvT7yQQE=) 2026-01-01 00:25:23.018929 | orchestrator | 2026-01-01 00:25:23.018944 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:23.018957 | orchestrator | Thursday 01 January 2026 00:25:18 +0000 (0:00:01.111) 0:00:24.886 ****** 2026-01-01 00:25:23.018969 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCv7ZDotfzyOTJqJKs5bAXkF8gMSmJAxehyuM/hf7fvAh4Al/SZz+i9TZykTLEGzj0GAcNpodlvk6Gl24Kroyi8=) 2026-01-01 00:25:23.018982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEEF9maqlvOkojQzQAdzX3pTZArkTwIf2Nx4Z2icdRrC) 2026-01-01 00:25:23.018997 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDHZ1Tv7x81t6f/VjDrouWL3vAfWELqLuABQOAECPG3wD2fkJpN0aSgwJXL0lXqbASvu7UnIe+llK5j8qyC8C+cd0+xOyonTq58WyPI/L50qlfgj/y9aL6NVKxeAnDuYq1u+v9nrIjzHTbSzqwDLokqo2vvOuJOp/787y22AL61ICktL4FwHFRuN28GKFydrRLtZeB5MEKmYLIKyfZR29ss59zOLUq6A/DQm+mTgXuxpnm2NI+p6gCXc2zMFsHaj2ACYQfNftTxGZkMGabyJV7DNaQGfVMWU4XQcsW2EsvS5HtKzfJSXM5x34kmQ1yoG+gobsw8LsTcPTsJBMZfntzhHdcp0FUlyiWH90ehEL+XnMkQ1hTzX0Dc4wNlkk7WjMkIoKXLiPvE8p8qMbbqLKIhtz1yLRd+9ZIdODzEL8Ka5rL3qJALn6BIAO5Q5Y0RrWA2SOSAezL9zlx13WTEU+J1jxU2ooZB8RQTTZHgC2G9AY87YGJ/TT1M8KHi0UMfWgs=) 2026-01-01 00:25:23.019040 | orchestrator | 2026-01-01 00:25:23.019052 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:23.019063 | orchestrator | Thursday 01 January 2026 00:25:19 +0000 (0:00:01.146) 0:00:26.032 ****** 2026-01-01 00:25:23.019090 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr+5/YE0fO3LgDhCxomsIemIO9kuOAUHc/ClqkZsdXgTeydvR78BV77QZHdsBYeM+pd8AuToDTxFX6TjHoe0Ojrq0+xl7VWO7GRQSRJxdoReeqcTTa1g0nV0M8+ziTnMoabU3LhfLssQMb5A2ayjpnBkrnOAjZsDyK+1GMKqUvpU5Iac32Ryeg3GFZEX9xWX7mbP34Itf+7HyaHg7emRUMSEAZJfjAMVJS0Jf2sG4Q+/k0NCm/RtFk9Vyx73q1lUXbZb30MROY/mtvT44crZb9XIxc78CIwODh9eeadjjSsWs7e/NR5OH7UCiuAUKmM70ZomSENpiin9YR672Oqlit4WdQG5BJ3ZimIJJ/Zl6yEIPYunArFxMLGvj71UrF7CDjyEUMtQnHpBoXEWgXNdycpNsvOktImvhv+OiMyrcrDwnF5vhp3EBkKTiEtpgukXdDZ2zMFOu9VR5MHBYEKD74fuSFosfNa4K2n2Yxvd7Y48yuReOD/kzWOfQWWzX0Rbk=) 2026-01-01 00:25:23.019102 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIl3vc2al3nv57e6S+IZgBWd39rt4tOYXRS6zW9wr94x) 2026-01-01 00:25:23.019114 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGd+xyBMzae8BP8baNeiboGEKxYvUDqJynl4EO8cEfh0hYUofO69OLZQDqWeoI9rYy5ReecJTwwpFPtjUB09kt0=) 2026-01-01 00:25:23.019125 | orchestrator | 2026-01-01 00:25:23.019136 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:25:23.019147 | orchestrator | Thursday 01 January 2026 00:25:20 +0000 (0:00:01.181) 0:00:27.213 ****** 2026-01-01 00:25:23.019158 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBjZ+iGd2xSNm23asic3C8M3v2ONt2kkf6YY4abOyWH9ocgw5DxdcJPJWe/ygGJOPYZxRukfVehaGSs8yV6TwxI=) 2026-01-01 00:25:23.019169 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII5ZxOXRgXpe4aOsD5CD7eyazJObVwWYTRp1E0PqEh1S) 2026-01-01 00:25:23.019180 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0Rj/9Rp7HJKnsvfu+XkcI3LDGAx730KR43xLKsd6InkQr6sb4+jfVRdmqjHnof+Jwckt6tHNo5i3Oog5z3fe/OLvkUWsbxJz2MpK9hRYXTVawQFHMhYZB2UGkC9mw+L+wkcFvDwFW1pyeXMobdLH+F+P98fSuOcL598qAVfe7v2BnmTFHOW46cS+gLPmsB9vZjvLcM2JVK5mj3MjRgGirrZnJceO0q40GIQeumDsecrJXYyYlW72NHzy6aGcnDTGPzxmMhMIehR8zIL3Vh3yv0MyRABgotfF4KFx01KFMqXLVfVuQNm+2SAdTKDbGkcJmAVWCI30drK8xfUi8sndWz1+BFTzCm5dIjO7TgwWMn4Lbd/7vKtw86Jw/8V9Dv8jOD5wUKKSCEPlwy5Vb/O8Jt3SDENteoQqf3guBMCq7BLwKo1wjfMyEq44PjAFkqdLUTPU3WO7vntDac9Ri3N/NeTUpq9bEVjQd78NossXCK6t2u8tyQ0UYaKIaHmInsb0=) 2026-01-01 00:25:23.019192 | orchestrator | 2026-01-01 00:25:23.019203 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-01 00:25:23.019213 | orchestrator | Thursday 01 January 2026 00:25:21 +0000 (0:00:01.178) 0:00:28.392 ****** 2026-01-01 00:25:23.019225 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-01 00:25:23.019237 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-01 00:25:23.019248 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-01 00:25:23.019259 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-01 00:25:23.019286 | orchestrator 
| skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-01 00:25:23.019298 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-01 00:25:23.019312 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-01 00:25:23.019326 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:25:23.019339 | orchestrator |
2026-01-01 00:25:23.019353 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-01-01 00:25:23.019373 | orchestrator | Thursday 01 January 2026 00:25:21 +0000 (0:00:00.179) 0:00:28.572 ******
2026-01-01 00:25:23.019387 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:25:23.019399 | orchestrator |
2026-01-01 00:25:23.019412 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-01-01 00:25:23.019425 | orchestrator | Thursday 01 January 2026 00:25:21 +0000 (0:00:00.048) 0:00:28.620 ******
2026-01-01 00:25:23.019439 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:25:23.019452 | orchestrator |
2026-01-01 00:25:23.019466 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-01-01 00:25:23.019478 | orchestrator | Thursday 01 January 2026 00:25:21 +0000 (0:00:00.055) 0:00:28.676 ******
2026-01-01 00:25:23.019491 | orchestrator | changed: [testbed-manager]
2026-01-01 00:25:23.019505 | orchestrator |
2026-01-01 00:25:23.019517 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:25:23.019531 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-01 00:25:23.019545 | orchestrator |
2026-01-01 00:25:23.019558 | orchestrator |
2026-01-01 00:25:23.019571 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:25:23.019584 | orchestrator | Thursday 01 January 2026 00:25:22 +0000 (0:00:00.799) 0:00:29.475 ******
2026-01-01 00:25:23.019598 | orchestrator | ===============================================================================
2026-01-01 00:25:23.019611 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.12s
2026-01-01 00:25:23.019625 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.59s
2026-01-01 00:25:23.019639 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s
2026-01-01 00:25:23.019652 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s
2026-01-01 00:25:23.019666 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s
2026-01-01 00:25:23.019677 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s
2026-01-01 00:25:23.019688 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s
2026-01-01 00:25:23.019699 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s
2026-01-01 00:25:23.019710 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s
2026-01-01 00:25:23.019721 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2026-01-01 00:25:23.019732 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2026-01-01 00:25:23.019742 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-01-01 00:25:23.019753 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-01-01 00:25:23.019771 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-01-01 00:25:23.019783 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-01-01 00:25:23.019828 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2026-01-01 00:25:23.019846 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.80s
2026-01-01 00:25:23.019863 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.21s
2026-01-01 00:25:23.019881 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s
2026-01-01 00:25:23.019899 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s
2026-01-01 00:25:23.375626 | orchestrator | + osism apply squid
2026-01-01 00:25:35.546174 | orchestrator | 2026-01-01 00:25:35 | INFO  | Task 83e2a5b3-12c3-42bc-a618-4cd51bfda030 (squid) was prepared for execution.
2026-01-01 00:25:35.546295 | orchestrator | 2026-01-01 00:25:35 | INFO  | It takes a moment until task 83e2a5b3-12c3-42bc-a618-4cd51bfda030 (squid) has been started and output is visible here.
2026-01-01 00:27:32.286860 | orchestrator |
2026-01-01 00:27:32.287000 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-01-01 00:27:32.287016 | orchestrator |
2026-01-01 00:27:32.287024 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-01-01 00:27:32.287032 | orchestrator | Thursday 01 January 2026 00:25:39 +0000 (0:00:00.174) 0:00:00.174 ******
2026-01-01 00:27:32.287040 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-01-01 00:27:32.287049 | orchestrator |
2026-01-01 00:27:32.287056 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-01-01 00:27:32.287065 | orchestrator | Thursday 01 January 2026 00:25:40 +0000 (0:00:00.089) 0:00:00.264 ******
2026-01-01 00:27:32.287110 | orchestrator | ok: [testbed-manager]
2026-01-01 00:27:32.287122 | orchestrator |
2026-01-01 00:27:32.287130 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-01-01 00:27:32.287138 | orchestrator | Thursday 01 January 2026 00:25:41 +0000 (0:00:01.685) 0:00:01.949 ******
2026-01-01 00:27:32.287147 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-01-01 00:27:32.287155 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-01-01 00:27:32.287162 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-01-01 00:27:32.287169 | orchestrator |
2026-01-01 00:27:32.287176 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-01-01 00:27:32.287183 | orchestrator | Thursday 01 January 2026 00:25:42 +0000 (0:00:01.224) 0:00:03.173 ******
2026-01-01 00:27:32.287191 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-01-01 00:27:32.287199 | orchestrator |
2026-01-01 00:27:32.287206 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-01-01 00:27:32.287213 | orchestrator | Thursday 01 January 2026 00:25:44 +0000 (0:00:01.197) 0:00:04.371 ******
2026-01-01 00:27:32.287220 | orchestrator | ok: [testbed-manager]
2026-01-01 00:27:32.287227 | orchestrator |
2026-01-01 00:27:32.287234 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-01-01 00:27:32.287242 | orchestrator | Thursday 01 January 2026 00:25:44 +0000 (0:00:00.376) 0:00:04.748 ******
2026-01-01 00:27:32.287248 | orchestrator | changed: [testbed-manager]
2026-01-01 00:27:32.287256 | orchestrator |
2026-01-01 00:27:32.287263 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-01-01 00:27:32.287271 | orchestrator | Thursday 01 January 2026 00:25:45 +0000 (0:00:00.982) 0:00:05.730 ******
2026-01-01 00:27:32.287278 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-01-01 00:27:32.287286 | orchestrator | ok: [testbed-manager]
2026-01-01 00:27:32.287293 | orchestrator |
2026-01-01 00:27:32.287300 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-01-01 00:27:32.287307 | orchestrator | Thursday 01 January 2026 00:26:18 +0000 (0:00:33.503) 0:00:39.234 ******
2026-01-01 00:27:32.287314 | orchestrator | changed: [testbed-manager]
2026-01-01 00:27:32.287321 | orchestrator |
2026-01-01 00:27:32.287328 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-01-01 00:27:32.287335 | orchestrator | Thursday 01 January 2026 00:26:31 +0000 (0:00:12.180) 0:00:51.414 ******
2026-01-01 00:27:32.287342 | orchestrator | Pausing for 60 seconds
2026-01-01 00:27:32.287350 | orchestrator | changed: [testbed-manager]
2026-01-01 00:27:32.287357 | orchestrator |
2026-01-01 00:27:32.287365 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-01-01 00:27:32.287373 | orchestrator | Thursday 01 January 2026 00:27:31 +0000 (0:01:00.094) 0:01:51.508 ******
2026-01-01 00:27:32.287379 | orchestrator | ok: [testbed-manager]
2026-01-01 00:27:32.287385 | orchestrator |
2026-01-01 00:27:32.287392 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-01-01 00:27:32.287429 | orchestrator | Thursday 01 January 2026 00:27:31 +0000 (0:00:00.076) 0:01:51.585 ******
2026-01-01 00:27:32.287438 | orchestrator | changed: [testbed-manager]
2026-01-01 00:27:32.287445 | orchestrator |
2026-01-01 00:27:32.287454 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:27:32.287463 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:27:32.287472 | orchestrator |
2026-01-01 00:27:32.287479 | orchestrator |
2026-01-01 00:27:32.287487 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:27:32.287494 | orchestrator | Thursday 01 January 2026 00:27:31 +0000 (0:00:00.662) 0:01:52.248 ******
2026-01-01 00:27:32.287502 | orchestrator | ===============================================================================
2026-01-01 00:27:32.287510 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-01-01 00:27:32.287518 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.50s
2026-01-01 00:27:32.287526 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.18s
2026-01-01 00:27:32.287534 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.69s
2026-01-01 00:27:32.287542 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s
2026-01-01 00:27:32.287549 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.20s
2026-01-01 00:27:32.287557 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.98s
2026-01-01 00:27:32.287564 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s
2026-01-01 00:27:32.287571 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s
2026-01-01 00:27:32.287578 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2026-01-01 00:27:32.287586 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2026-01-01 00:27:32.621453 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-01 00:27:32.621547 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla
2026-01-01 00:27:32.630243 | orchestrator | + set -e
2026-01-01 00:27:32.630289 | orchestrator | + NAMESPACE=kolla
2026-01-01 00:27:32.630295 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-01-01 00:27:32.635193 | orchestrator | ++ semver latest 9.0.0
2026-01-01 00:27:32.703340 | orchestrator | + [[ -1 -lt 0 ]]
2026-01-01 00:27:32.703425 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-01 00:27:32.704154 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-01-01 00:27:44.893279 | orchestrator | 2026-01-01 00:27:44 | INFO  | Task c38077b7-9aca-4582-a258-351aa23df06f (operator) was prepared for execution.
2026-01-01 00:27:44.893402 | orchestrator | 2026-01-01 00:27:44 | INFO  | It takes a moment until task c38077b7-9aca-4582-a258-351aa23df06f (operator) has been started and output is visible here.
2026-01-01 00:28:01.906294 | orchestrator |
2026-01-01 00:28:01.906447 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-01-01 00:28:01.906465 | orchestrator |
2026-01-01 00:28:01.906477 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-01 00:28:01.906490 | orchestrator | Thursday 01 January 2026 00:27:49 +0000 (0:00:00.165) 0:00:00.165 ******
2026-01-01 00:28:01.906503 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:01.906518 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:01.906530 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:01.906541 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:01.906552 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:01.906567 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:01.906579 | orchestrator |
2026-01-01 00:28:01.906590 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-01-01 00:28:01.906601 | orchestrator | Thursday 01 January 2026 00:27:52 +0000 (0:00:03.564) 0:00:03.730 ******
2026-01-01 00:28:01.906642 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:01.906654 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:01.906665 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:01.906676 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:01.906687 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:01.906698 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:01.906709 | orchestrator |
2026-01-01 00:28:01.906756 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-01-01 00:28:01.906775 | orchestrator |
2026-01-01 00:28:01.906794 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-01-01 00:28:01.906813 | orchestrator | Thursday 01 January 2026 00:27:53 +0000 (0:00:00.877) 0:00:04.607 ******
2026-01-01 00:28:01.906833 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:01.906852 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:01.906871 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:01.906890 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:01.906908 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:01.906926 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:01.906944 | orchestrator |
2026-01-01 00:28:01.906964 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-01-01 00:28:01.906983 | orchestrator | Thursday 01 January 2026 00:27:53 +0000 (0:00:00.210) 0:00:04.818 ******
2026-01-01 00:28:01.907003 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:01.907023 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:01.907043 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:01.907062 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:01.907080 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:01.907099 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:01.907117 | orchestrator |
2026-01-01 00:28:01.907136 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-01-01 00:28:01.907175 | orchestrator | Thursday 01 January 2026 00:27:54 +0000 (0:00:00.219) 0:00:05.038 ******
2026-01-01 00:28:01.907188 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:28:01.907200 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:28:01.907211 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:28:01.907228 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:28:01.907239 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:28:01.907250 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:28:01.907261 | orchestrator |
2026-01-01 00:28:01.907272 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-01 00:28:01.907283 | orchestrator | Thursday 01 January 2026 00:27:54 +0000 (0:00:00.668) 0:00:05.706 ******
2026-01-01 00:28:01.907294 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:28:01.907304 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:28:01.907315 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:28:01.907326 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:28:01.907337 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:28:01.907348 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:28:01.907358 | orchestrator |
2026-01-01 00:28:01.907369 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-01 00:28:01.907380 | orchestrator | Thursday 01 January 2026 00:27:55 +0000 (0:00:01.220) 0:00:07.727 ******
2026-01-01 00:28:01.907391 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-01 00:28:01.907402 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-01 00:28:01.907413 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-01 00:28:01.907424 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-01 00:28:01.907435 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-01 00:28:01.907445 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-01 00:28:01.907456 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-01 00:28:01.907467 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-01 00:28:01.907477 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-01 00:28:01.907488 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-01 00:28:01.907509 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-01 00:28:01.907520 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-01 00:28:01.907530 | orchestrator |
2026-01-01 00:28:01.907541 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-01 00:28:01.907552 | orchestrator | Thursday 01 January 2026 00:27:56 +0000 (0:00:01.247) 0:00:08.974 ******
2026-01-01 00:28:01.907563 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:28:01.907574 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:28:01.907585 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:28:01.907596 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:28:01.907607 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:28:01.907618 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:28:01.907628 | orchestrator |
2026-01-01 00:28:01.907639 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-01 00:28:01.907651 | orchestrator | Thursday 01 January 2026 00:27:58 +0000 (0:00:01.220) 0:00:08.974 ******
2026-01-01 00:28:01.907662 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-01 00:28:01.907673 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-01 00:28:01.907684 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-01 00:28:01.907695 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:28:01.907758 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:28:01.907778 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:28:01.907797 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:28:01.907817 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:28:01.907835 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:28:01.907854 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-01 00:28:01.907866 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-01 00:28:01.907877 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-01 00:28:01.907887 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-01 00:28:01.907898 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-01 00:28:01.907908 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-01 00:28:01.907919 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:28:01.907930 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:28:01.907940 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:28:01.907951 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:28:01.907962 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:28:01.907973 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:28:01.907983 |
orchestrator | 2026-01-01 00:28:01.907994 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-01 00:28:01.908006 | orchestrator | Thursday 01 January 2026 00:27:59 +0000 (0:00:01.317) 0:00:10.292 ****** 2026-01-01 00:28:01.908017 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:28:01.908028 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:28:01.908039 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:28:01.908049 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:28:01.908060 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:28:01.908071 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:28:01.908081 | orchestrator | 2026-01-01 00:28:01.908092 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-01 00:28:01.908103 | orchestrator | Thursday 01 January 2026 00:27:59 +0000 (0:00:00.188) 0:00:10.480 ****** 2026-01-01 00:28:01.908122 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:28:01.908133 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:28:01.908144 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:28:01.908155 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:28:01.908165 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:28:01.908176 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:28:01.908187 | orchestrator | 2026-01-01 00:28:01.908198 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-01 00:28:01.908209 | orchestrator | Thursday 01 January 2026 00:27:59 +0000 (0:00:00.196) 0:00:10.677 ****** 2026-01-01 00:28:01.908219 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:28:01.908230 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:28:01.908241 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:28:01.908252 | orchestrator | changed: [testbed-node-3] 2026-01-01 
00:28:01.908263 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:28:01.908273 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:28:01.908284 | orchestrator | 2026-01-01 00:28:01.908294 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-01 00:28:01.908305 | orchestrator | Thursday 01 January 2026 00:28:00 +0000 (0:00:00.682) 0:00:11.360 ****** 2026-01-01 00:28:01.908316 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:28:01.908327 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:28:01.908338 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:28:01.908348 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:28:01.908359 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:28:01.908370 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:28:01.908380 | orchestrator | 2026-01-01 00:28:01.908391 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-01 00:28:01.908402 | orchestrator | Thursday 01 January 2026 00:28:00 +0000 (0:00:00.221) 0:00:11.581 ****** 2026-01-01 00:28:01.908413 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-01 00:28:01.908424 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:28:01.908435 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-01 00:28:01.908446 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:28:01.908456 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-01 00:28:01.908467 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:28:01.908478 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-01 00:28:01.908488 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:28:01.908499 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-01 00:28:01.908510 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:28:01.908521 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-01 
00:28:01.908531 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:28:01.908542 | orchestrator | 2026-01-01 00:28:01.908553 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-01 00:28:01.908564 | orchestrator | Thursday 01 January 2026 00:28:01 +0000 (0:00:00.806) 0:00:12.387 ****** 2026-01-01 00:28:01.908575 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:28:01.908585 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:28:01.908596 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:28:01.908606 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:28:01.908617 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:28:01.908628 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:28:01.908639 | orchestrator | 2026-01-01 00:28:01.908649 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-01 00:28:01.908660 | orchestrator | Thursday 01 January 2026 00:28:01 +0000 (0:00:00.205) 0:00:12.593 ****** 2026-01-01 00:28:01.908671 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:28:01.908682 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:28:01.908693 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:28:01.908703 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:28:01.908944 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:28:03.487105 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:28:03.487252 | orchestrator | 2026-01-01 00:28:03.487267 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-01 00:28:03.487278 | orchestrator | Thursday 01 January 2026 00:28:01 +0000 (0:00:00.208) 0:00:12.802 ****** 2026-01-01 00:28:03.487286 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:28:03.487293 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:28:03.487300 | orchestrator | skipping: [testbed-node-2] 2026-01-01 
00:28:03.487307 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:28:03.487313 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:28:03.487320 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:28:03.487327 | orchestrator | 2026-01-01 00:28:03.487334 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-01 00:28:03.487340 | orchestrator | Thursday 01 January 2026 00:28:02 +0000 (0:00:00.217) 0:00:13.020 ****** 2026-01-01 00:28:03.487347 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:28:03.487354 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:28:03.487360 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:28:03.487367 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:28:03.487374 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:28:03.487380 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:28:03.487387 | orchestrator | 2026-01-01 00:28:03.487393 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-01 00:28:03.487399 | orchestrator | Thursday 01 January 2026 00:28:02 +0000 (0:00:00.789) 0:00:13.810 ****** 2026-01-01 00:28:03.487406 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:28:03.487413 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:28:03.487419 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:28:03.487426 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:28:03.487432 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:28:03.487439 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:28:03.487445 | orchestrator | 2026-01-01 00:28:03.487451 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:28:03.487459 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-01 00:28:03.487466 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-01 00:28:03.487493 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-01 00:28:03.487503 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-01 00:28:03.487511 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-01 00:28:03.487518 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-01 00:28:03.487524 | orchestrator | 2026-01-01 00:28:03.487531 | orchestrator | 2026-01-01 00:28:03.487538 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:28:03.487544 | orchestrator | Thursday 01 January 2026 00:28:03 +0000 (0:00:00.276) 0:00:14.086 ****** 2026-01-01 00:28:03.487551 | orchestrator | =============================================================================== 2026-01-01 00:28:03.487557 | orchestrator | Gathering Facts --------------------------------------------------------- 3.56s 2026-01-01 00:28:03.487564 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.32s 2026-01-01 00:28:03.487572 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s 2026-01-01 00:28:03.487579 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s 2026-01-01 00:28:03.487593 | orchestrator | Do not require tty for all users ---------------------------------------- 0.88s 2026-01-01 00:28:03.487600 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.81s 2026-01-01 00:28:03.487606 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2026-01-01 00:28:03.487613 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.79s 2026-01-01 00:28:03.487619 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.68s 2026-01-01 00:28:03.487625 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s 2026-01-01 00:28:03.487632 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.28s 2026-01-01 00:28:03.487640 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s 2026-01-01 00:28:03.487647 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.22s 2026-01-01 00:28:03.487654 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.22s 2026-01-01 00:28:03.487660 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.21s 2026-01-01 00:28:03.487668 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.21s 2026-01-01 00:28:03.487678 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.21s 2026-01-01 00:28:03.487687 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s 2026-01-01 00:28:03.487698 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s 2026-01-01 00:28:03.881137 | orchestrator | + osism apply --environment custom facts 2026-01-01 00:28:06.042892 | orchestrator | 2026-01-01 00:28:06 | INFO  | Trying to run play facts in environment custom 2026-01-01 00:28:16.154245 | orchestrator | 2026-01-01 00:28:16 | INFO  | Task 5443a66d-0e22-4b7d-a6bf-98b441a2cccf (facts) was prepared for execution. 2026-01-01 00:28:16.154385 | orchestrator | 2026-01-01 00:28:16 | INFO  | It takes a moment until task 5443a66d-0e22-4b7d-a6bf-98b441a2cccf (facts) has been started and output is visible here. 
2026-01-01 00:29:01.961667 | orchestrator |
2026-01-01 00:29:01.961899 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-01 00:29:01.961919 | orchestrator |
2026-01-01 00:29:01.961931 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-01 00:29:01.961943 | orchestrator | Thursday 01 January 2026 00:28:20 +0000 (0:00:00.102) 0:00:00.102 ******
2026-01-01 00:29:01.961955 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:01.961968 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:29:01.961981 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:01.961992 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:01.962003 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:29:01.962014 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:29:01.962086 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:01.962097 | orchestrator |
2026-01-01 00:29:01.962125 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-01 00:29:01.962138 | orchestrator | Thursday 01 January 2026 00:28:21 +0000 (0:00:01.435) 0:00:01.538 ******
2026-01-01 00:29:01.962149 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:01.962172 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:29:01.962186 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:29:01.962198 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:01.962212 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:01.962225 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:01.962239 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:29:01.962252 | orchestrator |
2026-01-01 00:29:01.962265 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-01 00:29:01.962278 | orchestrator |
2026-01-01 00:29:01.962291 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-01 00:29:01.962331 | orchestrator | Thursday 01 January 2026 00:28:23 +0000 (0:00:01.261) 0:00:02.799 ******
2026-01-01 00:29:01.962346 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:01.962359 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:01.962373 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:01.962387 | orchestrator |
2026-01-01 00:29:01.962401 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-01 00:29:01.962432 | orchestrator | Thursday 01 January 2026 00:28:23 +0000 (0:00:00.143) 0:00:02.942 ******
2026-01-01 00:29:01.962445 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:01.962459 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:01.962471 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:01.962485 | orchestrator |
2026-01-01 00:29:01.962498 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-01 00:29:01.962511 | orchestrator | Thursday 01 January 2026 00:28:23 +0000 (0:00:00.218) 0:00:03.161 ******
2026-01-01 00:29:01.962524 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:01.962537 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:01.962550 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:01.962561 | orchestrator |
2026-01-01 00:29:01.962572 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-01 00:29:01.962583 | orchestrator | Thursday 01 January 2026 00:28:23 +0000 (0:00:00.219) 0:00:03.381 ******
2026-01-01 00:29:01.962595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:29:01.962609 | orchestrator |
2026-01-01 00:29:01.962620 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-01 00:29:01.962631 | orchestrator | Thursday 01 January 2026 00:28:23 +0000 (0:00:00.149) 0:00:03.531 ******
2026-01-01 00:29:01.962642 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:01.962653 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:01.962664 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:01.962674 | orchestrator |
2026-01-01 00:29:01.962715 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-01 00:29:01.962736 | orchestrator | Thursday 01 January 2026 00:28:24 +0000 (0:00:00.507) 0:00:04.039 ******
2026-01-01 00:29:01.962754 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:29:01.962774 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:29:01.962793 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:29:01.962809 | orchestrator |
2026-01-01 00:29:01.962820 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-01 00:29:01.962831 | orchestrator | Thursday 01 January 2026 00:28:24 +0000 (0:00:00.131) 0:00:04.170 ******
2026-01-01 00:29:01.962842 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:01.962852 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:01.962863 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:01.962874 | orchestrator |
2026-01-01 00:29:01.962885 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-01 00:29:01.962896 | orchestrator | Thursday 01 January 2026 00:28:25 +0000 (0:00:01.083) 0:00:05.253 ******
2026-01-01 00:29:01.962907 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:01.962918 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:01.962929 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:01.962939 | orchestrator |
2026-01-01 00:29:01.962951 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-01 00:29:01.962962 | orchestrator | Thursday 01 January 2026 00:28:26 +0000 (0:00:00.473) 0:00:05.727 ******
2026-01-01 00:29:01.962972 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:01.962983 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:01.962994 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:01.963005 | orchestrator |
2026-01-01 00:29:01.963016 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-01 00:29:01.963027 | orchestrator | Thursday 01 January 2026 00:28:27 +0000 (0:00:01.096) 0:00:06.823 ******
2026-01-01 00:29:01.963038 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:01.963058 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:01.963069 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:01.963080 | orchestrator |
2026-01-01 00:29:01.963091 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-01 00:29:01.963102 | orchestrator | Thursday 01 January 2026 00:28:44 +0000 (0:00:17.602) 0:00:24.426 ******
2026-01-01 00:29:01.963112 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:29:01.963123 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:29:01.963135 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:29:01.963145 | orchestrator |
2026-01-01 00:29:01.963156 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-01 00:29:01.963187 | orchestrator | Thursday 01 January 2026 00:28:44 +0000 (0:00:00.102) 0:00:24.528 ******
2026-01-01 00:29:01.963199 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:01.963210 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:01.963221 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:01.963232 | orchestrator |
2026-01-01 00:29:01.963243 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-01 00:29:01.963254 | orchestrator | Thursday 01 January 2026 00:28:52 +0000 (0:00:07.794) 0:00:32.323 ******
2026-01-01 00:29:01.963265 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:01.963276 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:01.963287 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:01.963298 | orchestrator |
2026-01-01 00:29:01.963309 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-01 00:29:01.963320 | orchestrator | Thursday 01 January 2026 00:28:53 +0000 (0:00:00.557) 0:00:32.881 ******
2026-01-01 00:29:01.963331 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-01 00:29:01.963342 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-01 00:29:01.963353 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-01 00:29:01.963364 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-01 00:29:01.963375 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-01 00:29:01.963386 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-01 00:29:01.963397 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-01 00:29:01.963408 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-01 00:29:01.963419 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-01 00:29:01.963430 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-01 00:29:01.963441 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-01 00:29:01.963452 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-01 00:29:01.963463 | orchestrator |
2026-01-01 00:29:01.963473 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-01 00:29:01.963484 | orchestrator | Thursday 01 January 2026 00:28:56 +0000 (0:00:03.641) 0:00:36.522 ******
2026-01-01 00:29:01.963495 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:01.963506 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:01.963517 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:01.963528 | orchestrator |
2026-01-01 00:29:01.963539 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-01 00:29:01.963550 | orchestrator |
2026-01-01 00:29:01.963561 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-01 00:29:01.963572 | orchestrator | Thursday 01 January 2026 00:28:58 +0000 (0:00:01.321) 0:00:37.844 ******
2026-01-01 00:29:01.963583 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:01.963594 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:01.963605 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:01.963616 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:01.963627 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:01.963645 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:01.963656 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:01.963666 | orchestrator |
2026-01-01 00:29:01.963677 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:29:01.963775 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:29:01.963798 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:29:01.963817 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:29:01.963836 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:29:01.963855 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:29:01.963873 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:29:01.963892 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:29:01.963910 | orchestrator |
2026-01-01 00:29:01.963929 | orchestrator |
2026-01-01 00:29:01.963947 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:29:01.963965 | orchestrator | Thursday 01 January 2026 00:29:01 +0000 (0:00:03.688) 0:00:41.533 ******
2026-01-01 00:29:01.963982 | orchestrator | ===============================================================================
2026-01-01 00:29:01.964000 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.60s
2026-01-01 00:29:01.964017 | orchestrator | Install required packages (Debian) -------------------------------------- 7.79s
2026-01-01 00:29:01.964034 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.69s
2026-01-01 00:29:01.964052 | orchestrator | Copy fact files --------------------------------------------------------- 3.64s
2026-01-01 00:29:01.964070 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s
2026-01-01 00:29:01.964088 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.32s
2026-01-01 00:29:01.964118 | orchestrator | Copy fact file ---------------------------------------------------------- 1.26s
2026-01-01 00:29:02.245500 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2026-01-01 00:29:02.245624 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.08s
2026-01-01 00:29:02.245640 | orchestrator | Create custom facts directory ------------------------------------------- 0.56s
2026-01-01 00:29:02.245652 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.51s
2026-01-01 00:29:02.245663 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2026-01-01 00:29:02.245674 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-01-01 00:29:02.245743 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2026-01-01 00:29:02.245764 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-01-01 00:29:02.245784 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s
2026-01-01 00:29:02.245803 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-01-01 00:29:02.245822 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-01-01 00:29:02.601366 | orchestrator | + osism apply bootstrap
2026-01-01 00:29:14.801368 | orchestrator | 2026-01-01 00:29:14 | INFO  | Task bafce7f5-1ec2-4c51-916c-2403bc21a764 (bootstrap) was prepared for execution.
2026-01-01 00:29:14.801464 | orchestrator | 2026-01-01 00:29:14 | INFO  | It takes a moment until task bafce7f5-1ec2-4c51-916c-2403bc21a764 (bootstrap) has been started and output is visible here.
2026-01-01 00:29:31.156621 | orchestrator | 2026-01-01 00:29:31.156748 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-01-01 00:29:31.156765 | orchestrator | 2026-01-01 00:29:31.156791 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-01-01 00:29:31.156802 | orchestrator | Thursday 01 January 2026 00:29:18 +0000 (0:00:00.154) 0:00:00.154 ****** 2026-01-01 00:29:31.156813 | orchestrator | ok: [testbed-manager] 2026-01-01 00:29:31.156825 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:29:31.156836 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:29:31.156846 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:29:31.156856 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:29:31.156866 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:29:31.156875 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:29:31.156885 | orchestrator | 2026-01-01 00:29:31.156896 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-01 00:29:31.156905 | orchestrator | 2026-01-01 00:29:31.156915 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-01 00:29:31.156925 | orchestrator | Thursday 01 January 2026 00:29:19 +0000 (0:00:00.290) 0:00:00.444 ****** 2026-01-01 00:29:31.156935 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:29:31.156945 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:29:31.156955 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:29:31.156965 | orchestrator | ok: [testbed-manager] 2026-01-01 00:29:31.156975 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:29:31.156985 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:29:31.156994 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:29:31.157004 | orchestrator | 2026-01-01 00:29:31.157014 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-01-01 00:29:31.157023 | orchestrator |
2026-01-01 00:29:31.157033 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-01 00:29:31.157043 | orchestrator | Thursday 01 January 2026 00:29:22 +0000 (0:00:03.779) 0:00:04.224 ******
2026-01-01 00:29:31.157053 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-01 00:29:31.157064 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-01 00:29:31.157073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-01 00:29:31.157083 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-01 00:29:31.157092 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-01 00:29:31.157102 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:29:31.157112 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-01 00:29:31.157121 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-01 00:29:31.157131 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-01 00:29:31.157141 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-01 00:29:31.157150 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-01 00:29:31.157160 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-01 00:29:31.157172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-01 00:29:31.157183 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-01 00:29:31.157195 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-01 00:29:31.157207 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-01 00:29:31.157218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-01 00:29:31.157229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 00:29:31.157241 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:29:31.157252 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-01 00:29:31.157287 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-01 00:29:31.157299 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-01 00:29:31.157310 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-01 00:29:31.157321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-01 00:29:31.157333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-01 00:29:31.157344 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-01 00:29:31.157355 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:29:31.157367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-01 00:29:31.157378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-01 00:29:31.157388 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-01 00:29:31.157398 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:29:31.157407 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-01 00:29:31.157417 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-01 00:29:31.157426 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-01 00:29:31.157436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-01 00:29:31.157445 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-01 00:29:31.157455 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-01 00:29:31.157465 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-01 00:29:31.157474 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-01 00:29:31.157484 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:29:31.157493 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-01 00:29:31.157503 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-01 00:29:31.157513 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-01 00:29:31.157522 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-01 00:29:31.157532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-01 00:29:31.157542 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:29:31.157551 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-01 00:29:31.157578 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-01 00:29:31.157589 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-01 00:29:31.157598 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-01 00:29:31.157608 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-01 00:29:31.157617 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-01 00:29:31.157627 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-01 00:29:31.157636 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:29:31.157646 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-01 00:29:31.157655 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:29:31.157665 | orchestrator |
2026-01-01 00:29:31.157694 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-01 00:29:31.157704 | orchestrator |
2026-01-01 00:29:31.157714 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-01 00:29:31.157724 | orchestrator | Thursday 01 January 2026 00:29:23 +0000 (0:00:00.539) 0:00:04.763 ******
2026-01-01 00:29:31.157733 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:31.157743 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:31.157752 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:31.157762 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:31.157771 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:31.157781 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:31.157790 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:31.157800 | orchestrator |
2026-01-01 00:29:31.157809 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-01 00:29:31.157827 | orchestrator | Thursday 01 January 2026 00:29:24 +0000 (0:00:01.278) 0:00:06.042 ******
2026-01-01 00:29:31.157837 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:31.157846 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:31.157856 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:31.157866 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:31.157875 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:31.157885 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:31.157894 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:31.157904 | orchestrator |
2026-01-01 00:29:31.157914 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-01 00:29:31.157924 | orchestrator | Thursday 01 January 2026 00:29:26 +0000 (0:00:01.293) 0:00:07.336 ******
2026-01-01 00:29:31.157934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:29:31.157946 | orchestrator |
2026-01-01 00:29:31.157956 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-01 00:29:31.157966 | orchestrator | Thursday 01 January 2026 00:29:26 +0000 (0:00:00.347) 0:00:07.683 ******
2026-01-01 00:29:31.157975 | orchestrator | changed: [testbed-manager]
2026-01-01 00:29:31.157985 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:29:31.157995 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:31.158004 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:31.158073 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:29:31.158087 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:31.158097 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:29:31.158106 | orchestrator |
2026-01-01 00:29:31.158116 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-01-01 00:29:31.158126 | orchestrator | Thursday 01 January 2026 00:29:28 +0000 (0:00:02.172) 0:00:09.856 ******
2026-01-01 00:29:31.158136 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:29:31.158147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:29:31.158159 | orchestrator |
2026-01-01 00:29:31.158169 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-01-01 00:29:31.158179 | orchestrator | Thursday 01 January 2026 00:29:28 +0000 (0:00:00.323) 0:00:10.180 ******
2026-01-01 00:29:31.158188 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:31.158198 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:31.158208 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:31.158217 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:29:31.158227 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:29:31.158236 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:29:31.158246 | orchestrator |
2026-01-01 00:29:31.158256 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-01-01 00:29:31.158265 | orchestrator | Thursday 01 January 2026 00:29:29 +0000 (0:00:01.071) 0:00:11.251 ******
2026-01-01 00:29:31.158275 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:29:31.158285 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:31.158294 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:29:31.158304 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:31.158321 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:29:31.158331 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:31.158341 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:29:31.158351 | orchestrator |
2026-01-01 00:29:31.158360 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-01-01 00:29:31.158370 | orchestrator | Thursday 01 January 2026 00:29:30 +0000 (0:00:00.640) 0:00:11.891 ******
2026-01-01 00:29:31.158380 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:29:31.158389 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:29:31.158406 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:29:31.158415 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:29:31.158425 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:29:31.158435 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:29:31.158444 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:31.158454 | orchestrator |
2026-01-01 00:29:31.158464 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-01 00:29:31.158475 | orchestrator | Thursday 01 January 2026 00:29:31 +0000 (0:00:00.430) 0:00:12.322 ******
2026-01-01 00:29:31.158485 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:29:31.158495 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:29:31.158511 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:29:43.951911 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:29:43.952024 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:29:43.952057 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:29:43.952069 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:29:43.952079 | orchestrator |
2026-01-01 00:29:43.952090 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-01 00:29:43.952101 | orchestrator | Thursday 01 January 2026 00:29:31 +0000 (0:00:00.252) 0:00:12.574 ******
2026-01-01 00:29:43.952114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:29:43.952134 | orchestrator |
2026-01-01 00:29:43.952144 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-01 00:29:43.952155 | orchestrator | Thursday 01 January 2026 00:29:31 +0000 (0:00:00.323) 0:00:12.898 ******
2026-01-01 00:29:43.952165 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:29:43.952174 | orchestrator |
2026-01-01 00:29:43.952184 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-01 00:29:43.952194 | orchestrator | Thursday 01 January 2026 00:29:31 +0000 (0:00:00.317) 0:00:13.215 ******
2026-01-01 00:29:43.952204 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.952215 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:43.952225 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:43.952234 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:43.952244 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:43.952254 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:43.952264 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:43.952273 | orchestrator |
2026-01-01 00:29:43.952283 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-01 00:29:43.952293 | orchestrator | Thursday 01 January 2026 00:29:33 +0000 (0:00:01.574) 0:00:14.790 ******
2026-01-01 00:29:43.952303 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:29:43.952313 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:29:43.952323 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:29:43.952333 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:29:43.952343 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:29:43.952353 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:29:43.952363 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:29:43.952372 | orchestrator |
2026-01-01 00:29:43.952382 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-01 00:29:43.952392 | orchestrator | Thursday 01 January 2026 00:29:33 +0000 (0:00:00.336) 0:00:15.126 ******
2026-01-01 00:29:43.952402 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.952412 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:43.952422 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:43.952434 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:43.952446 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:43.952478 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:43.952490 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:43.952501 | orchestrator |
2026-01-01 00:29:43.952512 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-01 00:29:43.952524 | orchestrator | Thursday 01 January 2026 00:29:34 +0000 (0:00:00.547) 0:00:15.674 ******
2026-01-01 00:29:43.952535 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:29:43.952547 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:29:43.952559 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:29:43.952569 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:29:43.952581 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:29:43.952593 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:29:43.952605 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:29:43.952616 | orchestrator |
2026-01-01 00:29:43.952628 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-01 00:29:43.952641 | orchestrator | Thursday 01 January 2026 00:29:34 +0000 (0:00:00.285) 0:00:15.959 ******
2026-01-01 00:29:43.952652 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.952685 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:43.952697 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:43.952709 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:43.952719 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:29:43.952730 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:29:43.952742 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:29:43.952753 | orchestrator |
2026-01-01 00:29:43.952764 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-01 00:29:43.952776 | orchestrator | Thursday 01 January 2026 00:29:35 +0000 (0:00:00.563) 0:00:16.522 ******
2026-01-01 00:29:43.952787 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.952797 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:29:43.952807 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:43.952816 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:29:43.952826 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:43.952835 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:43.952844 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:29:43.952854 | orchestrator |
2026-01-01 00:29:43.952864 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-01 00:29:43.952873 | orchestrator | Thursday 01 January 2026 00:29:36 +0000 (0:00:01.199) 0:00:17.722 ******
2026-01-01 00:29:43.952883 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.952892 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:43.952902 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:43.952912 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:43.952922 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:43.952931 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:43.952941 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:43.952951 | orchestrator |
2026-01-01 00:29:43.952960 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-01 00:29:43.952970 | orchestrator | Thursday 01 January 2026 00:29:37 +0000 (0:00:01.281) 0:00:19.004 ******
2026-01-01 00:29:43.953012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:29:43.953024 | orchestrator |
2026-01-01 00:29:43.953034 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-01 00:29:43.953044 | orchestrator | Thursday 01 January 2026 00:29:38 +0000 (0:00:00.333) 0:00:19.338 ******
2026-01-01 00:29:43.953053 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:29:43.953063 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:29:43.953073 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:29:43.953082 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:29:43.953092 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:29:43.953109 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:29:43.953119 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:29:43.953128 | orchestrator |
2026-01-01 00:29:43.953138 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-01 00:29:43.953148 | orchestrator | Thursday 01 January 2026 00:29:39 +0000 (0:00:01.272) 0:00:20.610 ******
2026-01-01 00:29:43.953158 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.953167 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:43.953177 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:43.953187 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:43.953196 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:43.953206 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:43.953215 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:43.953225 | orchestrator |
2026-01-01 00:29:43.953235 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-01 00:29:43.953244 | orchestrator | Thursday 01 January 2026 00:29:39 +0000 (0:00:00.221) 0:00:20.832 ******
2026-01-01 00:29:43.953254 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.953264 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:43.953273 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:43.953283 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:43.953293 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:43.953302 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:43.953312 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:43.953322 | orchestrator |
2026-01-01 00:29:43.953332 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-01 00:29:43.953342 | orchestrator | Thursday 01 January 2026 00:29:39 +0000 (0:00:00.229) 0:00:21.062 ******
2026-01-01 00:29:43.953351 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.953361 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:43.953371 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:43.953380 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:43.953390 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:43.953399 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:43.953409 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:43.953419 | orchestrator |
2026-01-01 00:29:43.953428 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-01 00:29:43.953438 | orchestrator | Thursday 01 January 2026 00:29:39 +0000 (0:00:00.249) 0:00:21.311 ******
2026-01-01 00:29:43.953449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:29:43.953460 | orchestrator |
2026-01-01 00:29:43.953470 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-01 00:29:43.953479 | orchestrator | Thursday 01 January 2026 00:29:40 +0000 (0:00:00.299) 0:00:21.611 ******
2026-01-01 00:29:43.953489 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:43.953499 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.953508 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:43.953518 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:43.953527 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:43.953537 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:43.953546 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:43.953556 | orchestrator |
2026-01-01 00:29:43.953566 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-01 00:29:43.953576 | orchestrator | Thursday 01 January 2026 00:29:40 +0000 (0:00:00.546) 0:00:22.157 ******
2026-01-01 00:29:43.953585 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:29:43.953595 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:29:43.953605 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:29:43.953614 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:29:43.953624 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:29:43.953634 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:29:43.953643 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:29:43.953659 | orchestrator |
2026-01-01 00:29:43.953688 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-01 00:29:43.953698 | orchestrator | Thursday 01 January 2026 00:29:41 +0000 (0:00:00.255) 0:00:22.413 ******
2026-01-01 00:29:43.953708 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.953717 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:43.953727 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:43.953737 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:43.953747 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:29:43.953756 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:29:43.953766 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:29:43.953775 | orchestrator |
2026-01-01 00:29:43.953785 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-01 00:29:43.953795 | orchestrator | Thursday 01 January 2026 00:29:42 +0000 (0:00:01.118) 0:00:23.531 ******
2026-01-01 00:29:43.953804 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.953814 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:43.953824 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:43.953833 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:43.953843 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:29:43.953853 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:29:43.953862 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:29:43.953872 | orchestrator |
2026-01-01 00:29:43.953882 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-01 00:29:43.953891 | orchestrator | Thursday 01 January 2026 00:29:42 +0000 (0:00:00.594) 0:00:24.126 ******
2026-01-01 00:29:43.953901 | orchestrator | ok: [testbed-manager]
2026-01-01 00:29:43.953911 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:29:43.953920 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:29:43.953930 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:29:43.953946 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:26.421231 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:26.421362 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:26.421382 | orchestrator |
2026-01-01 00:30:26.421396 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-01 00:30:26.421410 | orchestrator | Thursday 01 January 2026 00:29:43 +0000 (0:00:01.126) 0:00:25.253 ******
2026-01-01 00:30:26.421423 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:26.421437 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:26.421450 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:26.421463 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:26.421477 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:26.421505 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:26.421518 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:26.421539 | orchestrator |
2026-01-01 00:30:26.421552 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-01-01 00:30:26.421565 | orchestrator | Thursday 01 January 2026 00:29:59 +0000 (0:00:15.953) 0:00:41.206 ******
2026-01-01 00:30:26.421577 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:26.421586 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:26.421594 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:26.421601 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:26.421609 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:26.421617 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:26.421646 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:26.421653 | orchestrator |
2026-01-01 00:30:26.421661 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-01 00:30:26.421669 | orchestrator | Thursday 01 January 2026 00:30:00 +0000 (0:00:00.240) 0:00:41.446 ******
2026-01-01 00:30:26.421676 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:26.421684 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:26.421691 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:26.421699 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:26.421706 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:26.421714 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:26.421721 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:26.421755 | orchestrator |
2026-01-01 00:30:26.421765 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-01 00:30:26.421774 | orchestrator | Thursday 01 January 2026 00:30:00 +0000 (0:00:00.262) 0:00:41.708 ******
2026-01-01 00:30:26.421782 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:26.421791 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:26.421800 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:26.421808 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:26.421817 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:26.421825 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:26.421833 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:26.421842 | orchestrator |
2026-01-01 00:30:26.421851 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-01 00:30:26.421860 | orchestrator | Thursday 01 January 2026 00:30:00 +0000 (0:00:00.251) 0:00:41.960 ******
2026-01-01 00:30:26.421872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:30:26.421883 | orchestrator |
2026-01-01 00:30:26.421892 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-01 00:30:26.421899 | orchestrator | Thursday 01 January 2026 00:30:00 +0000 (0:00:00.320) 0:00:42.281 ******
2026-01-01 00:30:26.421906 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:26.421913 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:26.421921 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:26.421928 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:26.421935 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:26.421942 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:26.421949 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:26.421957 | orchestrator |
2026-01-01 00:30:26.421964 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-01 00:30:26.421971 | orchestrator | Thursday 01 January 2026 00:30:02 +0000 (0:00:01.654) 0:00:43.936 ******
2026-01-01 00:30:26.421978 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:26.421986 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:30:26.422013 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:30:26.422078 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:26.422086 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:30:26.422094 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:26.422101 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:26.422108 | orchestrator |
2026-01-01 00:30:26.422116 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-01 00:30:26.422123 | orchestrator | Thursday 01 January 2026 00:30:03 +0000 (0:00:01.116) 0:00:45.052 ******
2026-01-01 00:30:26.422131 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:26.422138 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:26.422145 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:26.422153 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:26.422160 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:26.422167 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:26.422174 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:26.422182 | orchestrator |
2026-01-01 00:30:26.422189 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-01 00:30:26.422196 | orchestrator | Thursday 01 January 2026 00:30:04 +0000 (0:00:00.846) 0:00:45.898 ******
2026-01-01 00:30:26.422205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:30:26.422214 | orchestrator |
2026-01-01 00:30:26.422221 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-01 00:30:26.422229 | orchestrator | Thursday 01 January 2026 00:30:04 +0000 (0:00:00.315) 0:00:46.214 ******
2026-01-01 00:30:26.422237 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:26.422251 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:30:26.422259 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:30:26.422266 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:26.422273 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:26.422281 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:26.422288 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:30:26.422295 | orchestrator |
2026-01-01 00:30:26.422326 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-01 00:30:26.422335 | orchestrator | Thursday 01 January 2026 00:30:06 +0000 (0:00:01.145) 0:00:47.359 ******
2026-01-01 00:30:26.422342 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:30:26.422350 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:30:26.422357 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:30:26.422364 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:30:26.422371 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:30:26.422378 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:30:26.422385 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:30:26.422393 | orchestrator |
2026-01-01 00:30:26.422400 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-01 00:30:26.422408 | orchestrator | Thursday 01 January 2026 00:30:06 +0000 (0:00:00.247) 0:00:47.606 ******
2026-01-01 00:30:26.422415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:30:26.422423 | orchestrator |
2026-01-01 00:30:26.422430 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-01 00:30:26.422437 | orchestrator | Thursday 01 January 2026 00:30:06 +0000 (0:00:00.321) 0:00:47.927 ******
2026-01-01 00:30:26.422445 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:26.422452 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:26.422459 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:26.422467 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:26.422474 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:26.422481 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:26.422488 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:26.422496 | orchestrator |
2026-01-01 00:30:26.422503 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-01 00:30:26.422510 | orchestrator | Thursday 01 January 2026 00:30:08 +0000 (0:00:01.689) 0:00:49.616 ******
2026-01-01 00:30:26.422517 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:26.422525 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:30:26.422532 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:26.422539 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:30:26.422546 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:26.422554 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:26.422561 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:30:26.422568 | orchestrator |
2026-01-01 00:30:26.422575 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-01 00:30:26.422583 | orchestrator | Thursday 01 January 2026 00:30:09 +0000 (0:00:01.317) 0:00:50.934 ******
2026-01-01 00:30:26.422590 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:30:26.422597 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:30:26.422604 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:30:26.422612 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:26.422633 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:26.422641 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:26.422648 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:26.422656 | orchestrator |
2026-01-01 00:30:26.422663 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-01 00:30:26.422670 | orchestrator | Thursday 01 January 2026 00:30:23 +0000 (0:00:13.638) 0:01:04.573 ******
2026-01-01 00:30:26.422677 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:26.422684 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:26.422698 | orchestrator | ok:
[testbed-node-3] 2026-01-01 00:30:26.422705 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:26.422712 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:30:26.422726 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:30:26.422739 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:30:26.422750 | orchestrator | 2026-01-01 00:30:26.422770 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-01-01 00:30:26.422785 | orchestrator | Thursday 01 January 2026 00:30:24 +0000 (0:00:01.398) 0:01:05.971 ****** 2026-01-01 00:30:26.422797 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:26.422808 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:30:26.422820 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:30:26.422831 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:30:26.422842 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:30:26.422853 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:30:26.422865 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:30:26.422877 | orchestrator | 2026-01-01 00:30:26.422888 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-01-01 00:30:26.422901 | orchestrator | Thursday 01 January 2026 00:30:25 +0000 (0:00:00.934) 0:01:06.905 ****** 2026-01-01 00:30:26.422913 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:26.422924 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:30:26.422937 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:30:26.422949 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:30:26.422961 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:30:26.422974 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:30:26.422987 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:30:26.422995 | orchestrator | 2026-01-01 00:30:26.423008 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-01-01 00:30:26.423020 | orchestrator | Thursday 
01 January 2026 00:30:25 +0000 (0:00:00.242) 0:01:07.148 ****** 2026-01-01 00:30:26.423032 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:26.423043 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:30:26.423055 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:30:26.423068 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:30:26.423079 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:30:26.423092 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:30:26.423106 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:30:26.423114 | orchestrator | 2026-01-01 00:30:26.423121 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-01 00:30:26.423128 | orchestrator | Thursday 01 January 2026 00:30:26 +0000 (0:00:00.247) 0:01:07.395 ****** 2026-01-01 00:30:26.423136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:30:26.423144 | orchestrator | 2026-01-01 00:30:26.423159 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-01 00:32:47.545831 | orchestrator | Thursday 01 January 2026 00:30:26 +0000 (0:00:00.329) 0:01:07.725 ****** 2026-01-01 00:32:47.545983 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:32:47.546001 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:47.546013 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:32:47.546165 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:32:47.546181 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:32:47.546193 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:32:47.546204 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:32:47.546216 | orchestrator | 2026-01-01 00:32:47.546228 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-01-01 00:32:47.546240 | orchestrator | Thursday 01 January 2026 00:30:28 +0000 (0:00:01.732) 0:01:09.457 ******
2026-01-01 00:32:47.546251 | orchestrator | changed: [testbed-manager]
2026-01-01 00:32:47.546264 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:32:47.546275 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:32:47.546286 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:32:47.546325 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:32:47.546339 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:32:47.546352 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:32:47.546365 | orchestrator |
2026-01-01 00:32:47.546378 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-01 00:32:47.546393 | orchestrator | Thursday 01 January 2026 00:30:28 +0000 (0:00:00.563) 0:01:10.021 ******
2026-01-01 00:32:47.546405 | orchestrator | ok: [testbed-manager]
2026-01-01 00:32:47.546418 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.546431 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.546444 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.546484 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.546496 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.546509 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.546522 | orchestrator |
2026-01-01 00:32:47.546535 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-01 00:32:47.546548 | orchestrator | Thursday 01 January 2026 00:30:28 +0000 (0:00:00.227) 0:01:10.249 ******
2026-01-01 00:32:47.546561 | orchestrator | ok: [testbed-manager]
2026-01-01 00:32:47.546573 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.546585 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.546598 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.546611 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.546623 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.546635 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.546648 | orchestrator |
2026-01-01 00:32:47.546662 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-01 00:32:47.546674 | orchestrator | Thursday 01 January 2026 00:30:30 +0000 (0:00:01.160) 0:01:11.409 ******
2026-01-01 00:32:47.546685 | orchestrator | changed: [testbed-manager]
2026-01-01 00:32:47.546696 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:32:47.546707 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:32:47.546718 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:32:47.546729 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:32:47.546739 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:32:47.546751 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:32:47.546761 | orchestrator |
2026-01-01 00:32:47.546772 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-01 00:32:47.546783 | orchestrator | Thursday 01 January 2026 00:30:32 +0000 (0:00:02.059) 0:01:13.469 ******
2026-01-01 00:32:47.546794 | orchestrator | ok: [testbed-manager]
2026-01-01 00:32:47.546805 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.546816 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.546827 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.546838 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.546849 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.546860 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.546871 | orchestrator |
2026-01-01 00:32:47.546882 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-01 00:32:47.546893 | orchestrator | Thursday 01 January 2026 00:30:34 +0000 (0:00:02.320) 0:01:15.789 ******
2026-01-01 00:32:47.546918 | orchestrator | ok: [testbed-manager]
2026-01-01 00:32:47.546930 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.546941 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.546952 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.546963 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.546973 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.546985 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.547004 | orchestrator |
2026-01-01 00:32:47.547023 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-01 00:32:47.547042 | orchestrator | Thursday 01 January 2026 00:31:13 +0000 (0:00:38.725) 0:01:54.514 ******
2026-01-01 00:32:47.547060 | orchestrator | changed: [testbed-manager]
2026-01-01 00:32:47.547080 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:32:47.547100 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:32:47.547159 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:32:47.547173 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:32:47.547192 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:32:47.547208 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:32:47.547228 | orchestrator |
2026-01-01 00:32:47.547248 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-01 00:32:47.547268 | orchestrator | Thursday 01 January 2026 00:32:31 +0000 (0:01:18.248) 0:03:12.762 ******
2026-01-01 00:32:47.547279 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.547290 | orchestrator | ok: [testbed-manager]
2026-01-01 00:32:47.547301 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.547312 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.547322 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.547333 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.547343 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.547354 | orchestrator |
2026-01-01 00:32:47.547365 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-01 00:32:47.547376 | orchestrator | Thursday 01 January 2026 00:32:33 +0000 (0:00:01.779) 0:03:14.542 ******
2026-01-01 00:32:47.547386 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.547397 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.547408 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.547419 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.547429 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.547440 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.547472 | orchestrator | changed: [testbed-manager]
2026-01-01 00:32:47.547483 | orchestrator |
2026-01-01 00:32:47.547494 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-01 00:32:47.547505 | orchestrator | Thursday 01 January 2026 00:32:46 +0000 (0:00:13.063) 0:03:27.606 ******
2026-01-01 00:32:47.547555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-01 00:32:47.547574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-01 00:32:47.547589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-01 00:32:47.547602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-01 00:32:47.547613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-01 00:32:47.547634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-01 00:32:47.547645 | orchestrator |
2026-01-01 00:32:47.547660 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-01 00:32:47.547672 | orchestrator | Thursday 01 January 2026 00:32:46 +0000 (0:00:00.401) 0:03:28.007 ******
2026-01-01 00:32:47.547683 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:32:47.547694 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:32:47.547705 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:32:47.547716 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:32:47.547727 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:32:47.547738 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:32:47.547749 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:32:47.547760 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:32:47.547771 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:32:47.547782 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:32:47.547792 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:32:47.547803 | orchestrator |
2026-01-01 00:32:47.547814 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-01 00:32:47.547825 | orchestrator | Thursday 01 January 2026 00:32:47 +0000 (0:00:00.758) 0:03:28.765 ******
2026-01-01 00:32:47.547843 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:32:47.547857 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:32:47.547868 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:32:47.547879 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:32:47.547890 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:32:47.547913 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:32:53.432824 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:32:53.432941 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:32:53.432957 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:32:53.432969 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:32:53.432982 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:32:53.432993 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:32:53.433004 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:32:53.433015 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:32:53.433028 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:32:53.433042 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:32:53.433054 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:32:53.433088 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:32:53.433101 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:32:53.433112 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:32:53.433123 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:32:53.433134 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:32:53.433144 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:32:53.433155 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:32:53.433166 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:32:53.433177 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:32:53.433188 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:32:53.433199 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:32:53.433210 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:32:53.433221 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:32:53.433232 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:32:53.433242 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:32:53.433254 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:32:53.433264 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:32:53.433275 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:32:53.433286 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:32:53.433297 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:32:53.433308 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:32:53.433318 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:32:53.433329 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:32:53.433343 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:32:53.433357 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:32:53.433370 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:32:53.433382 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:32:53.433395 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:32:53.433408 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:32:53.433421 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:32:53.433434 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:32:53.433477 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:32:53.433523 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:32:53.433546 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:32:53.433560 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:32:53.433572 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:32:53.433584 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:32:53.433598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:32:53.433611 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:32:53.433624 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:32:53.433638 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:32:53.433651 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:32:53.433663 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:32:53.433677 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:32:53.433689 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:32:53.433700 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:32:53.433711 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:32:53.433722 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:32:53.433733 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:32:53.433743 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:32:53.433754 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:32:53.433765 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:32:53.433776 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:32:53.433787 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:32:53.433798 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:32:53.433808 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:32:53.433819 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:32:53.433830 | orchestrator |
2026-01-01 00:32:53.433842 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-01 00:32:53.433853 | orchestrator | Thursday 01 January 2026 00:32:52 +0000 (0:00:04.881) 0:03:33.647 ******
2026-01-01 00:32:53.433864 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:32:53.433875 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:32:53.433886 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:32:53.433897 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:32:53.433908 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:32:53.433918 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:32:53.433929 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:32:53.433940 | orchestrator |
2026-01-01 00:32:53.433957 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-01 00:32:53.433968 | orchestrator | Thursday 01 January 2026 00:32:52 +0000 (0:00:00.591) 0:03:34.239 ******
2026-01-01 00:32:53.433979 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:32:53.433990 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:32:53.434001 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:32:53.434012 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:32:53.434083 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:32:53.434095 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:32:53.434106 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:32:53.434117 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:32:53.434128 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:32:53.434144 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:32:53.434163 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:33:07.022281 | orchestrator |
2026-01-01 00:33:07.022396 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-01 00:33:07.022413 | orchestrator | Thursday 01 January 2026 00:32:53 +0000 (0:00:00.497) 0:03:34.737 ******
2026-01-01 00:33:07.022455 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:33:07.022471 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:33:07.022483 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:33:07.022497 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:33:07.022508 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:33:07.022520 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:33:07.022531 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:33:07.022543 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:33:07.022554 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:33:07.022565 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:33:07.022576 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:33:07.022588 | orchestrator |
2026-01-01 00:33:07.022599 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-01 00:33:07.022610 | orchestrator | Thursday 01 January 2026 00:32:54 +0000 (0:00:00.651) 0:03:35.388 ******
2026-01-01 00:33:07.022621 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:33:07.022633 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:33:07.022644 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:33:07.022655 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:33:07.022666 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:33:07.022677 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:33:07.022688 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:33:07.022699 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:33:07.022711 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:33:07.022747 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:33:07.022758 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:33:07.022770 | orchestrator |
2026-01-01 00:33:07.022781 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-01 00:33:07.022792 | orchestrator | Thursday 01 January 2026 00:32:54 +0000 (0:00:00.368) 0:03:36.008 ******
2026-01-01 00:33:07.022806 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:33:07.022819 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:33:07.022832 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:33:07.022846 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:33:07.022860 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:33:07.022873 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:33:07.022886 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:33:07.022900 | orchestrator |
2026-01-01 00:33:07.022913 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-01 00:33:07.022927 | orchestrator | Thursday 01 January 2026 00:32:55 +0000 (0:00:05.812) 0:03:36.376 ******
2026-01-01 00:33:07.022941 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:33:07.022955 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:33:07.022968 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:33:07.022981 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:33:07.022994 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:33:07.023008 | orchestrator | ok: [testbed-manager]
2026-01-01 00:33:07.023020 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:33:07.023034 | orchestrator |
2026-01-01 00:33:07.023047 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-01 00:33:07.023061 | orchestrator | Thursday 01 January 2026 00:33:00 +0000 (0:00:05.812) 0:03:42.189 ******
2026-01-01 00:33:07.023075 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-01 00:33:07.023088 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-01 00:33:07.023102 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:33:07.023116 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-01 00:33:07.023129 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:33:07.023142 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-01 00:33:07.023155 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:33:07.023166 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:33:07.023177 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-01 00:33:07.023188 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-01 00:33:07.023198 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:33:07.023209 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:33:07.023220 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-01 00:33:07.023231 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:33:07.023242 | orchestrator |
2026-01-01 00:33:07.023253 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-01 00:33:07.023264 | orchestrator | Thursday 01 January 2026 00:33:01 +0000 (0:00:00.337) 0:03:42.526 ******
2026-01-01 00:33:07.023275 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-01 00:33:07.023286 | orchestrator | ok: [testbed-node-4] =>
(item=cron) 2026-01-01 00:33:07.023297 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-01-01 00:33:07.023327 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-01-01 00:33:07.023339 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-01-01 00:33:07.023350 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-01-01 00:33:07.023361 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-01-01 00:33:07.023372 | orchestrator | 2026-01-01 00:33:07.023383 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-01-01 00:33:07.023394 | orchestrator | Thursday 01 January 2026 00:33:02 +0000 (0:00:01.236) 0:03:43.763 ****** 2026-01-01 00:33:07.023407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:33:07.023454 | orchestrator | 2026-01-01 00:33:07.023471 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-01-01 00:33:07.023482 | orchestrator | Thursday 01 January 2026 00:33:02 +0000 (0:00:00.450) 0:03:44.213 ****** 2026-01-01 00:33:07.023493 | orchestrator | ok: [testbed-manager] 2026-01-01 00:33:07.023505 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:33:07.023516 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:33:07.023526 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:33:07.023537 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:33:07.023548 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:33:07.023559 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:33:07.023570 | orchestrator | 2026-01-01 00:33:07.023581 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-01-01 00:33:07.023592 | orchestrator | Thursday 01 January 2026 00:33:04 +0000 (0:00:01.250) 0:03:45.464 
****** 2026-01-01 00:33:07.023603 | orchestrator | ok: [testbed-manager] 2026-01-01 00:33:07.023614 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:33:07.023625 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:33:07.023636 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:33:07.023647 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:33:07.023658 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:33:07.023669 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:33:07.023680 | orchestrator | 2026-01-01 00:33:07.023691 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-01 00:33:07.023702 | orchestrator | Thursday 01 January 2026 00:33:04 +0000 (0:00:00.596) 0:03:46.061 ****** 2026-01-01 00:33:07.023713 | orchestrator | changed: [testbed-manager] 2026-01-01 00:33:07.023724 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:33:07.023735 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:33:07.023746 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:33:07.023757 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:33:07.023768 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:33:07.023779 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:33:07.023790 | orchestrator | 2026-01-01 00:33:07.023801 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-01 00:33:07.023812 | orchestrator | Thursday 01 January 2026 00:33:05 +0000 (0:00:00.679) 0:03:46.740 ****** 2026-01-01 00:33:07.023841 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:33:07.023852 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:33:07.023863 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:33:07.023874 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:33:07.023885 | orchestrator | ok: [testbed-manager] 2026-01-01 00:33:07.023896 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:33:07.023907 | orchestrator | ok: [testbed-node-2] 2026-01-01 
00:33:07.023918 | orchestrator | 2026-01-01 00:33:07.023929 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-01 00:33:07.023940 | orchestrator | Thursday 01 January 2026 00:33:06 +0000 (0:00:00.624) 0:03:47.364 ****** 2026-01-01 00:33:07.023955 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225900.0774407, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:07.023970 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225927.848794, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:07.023996 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225930.9364111, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:07.024031 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225926.1993601, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:12.065889 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225927.4962242, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:12.066008 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225923.7653463, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:12.066100 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225930.8015175, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:12.066111 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:12.066119 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:12.066153 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 
1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:12.066179 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:12.066207 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:12.066216 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2026-01-01 00:33:12.066224 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:33:12.066233 | orchestrator | 2026-01-01 00:33:12.066243 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-01 00:33:12.066253 | orchestrator | Thursday 01 January 2026 00:33:07 +0000 (0:00:00.960) 0:03:48.325 ****** 2026-01-01 00:33:12.066262 | orchestrator | changed: [testbed-manager] 2026-01-01 00:33:12.066272 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:33:12.066280 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:33:12.066288 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:33:12.066296 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:33:12.066304 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:33:12.066312 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:33:12.066320 | orchestrator | 2026-01-01 00:33:12.066328 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-01 00:33:12.066347 | orchestrator | Thursday 01 January 2026 00:33:08 +0000 (0:00:01.111) 0:03:49.436 ****** 2026-01-01 00:33:12.066355 | orchestrator | changed: [testbed-manager] 2026-01-01 00:33:12.066363 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:33:12.066371 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:33:12.066379 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:33:12.066387 | orchestrator | changed: 
[testbed-node-0] 2026-01-01 00:33:12.066395 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:33:12.066403 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:33:12.066410 | orchestrator | 2026-01-01 00:33:12.066418 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-01 00:33:12.066454 | orchestrator | Thursday 01 January 2026 00:33:09 +0000 (0:00:01.188) 0:03:50.625 ****** 2026-01-01 00:33:12.066463 | orchestrator | changed: [testbed-manager] 2026-01-01 00:33:12.066471 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:33:12.066479 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:33:12.066487 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:33:12.066495 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:33:12.066502 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:33:12.066510 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:33:12.066518 | orchestrator | 2026-01-01 00:33:12.066526 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-01 00:33:12.066534 | orchestrator | Thursday 01 January 2026 00:33:10 +0000 (0:00:01.277) 0:03:51.902 ****** 2026-01-01 00:33:12.066541 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:33:12.066550 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:33:12.066558 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:33:12.066566 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:33:12.066573 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:33:12.066581 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:33:12.066589 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:33:12.066597 | orchestrator | 2026-01-01 00:33:12.066605 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-01 00:33:12.066613 | orchestrator | Thursday 01 January 2026 00:33:10 +0000 (0:00:00.259) 0:03:52.162 
****** 2026-01-01 00:33:12.066622 | orchestrator | ok: [testbed-manager] 2026-01-01 00:33:12.066631 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:33:12.066639 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:33:12.066658 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:33:12.066667 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:33:12.066679 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:33:12.066698 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:33:12.066713 | orchestrator | 2026-01-01 00:33:12.066726 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-01 00:33:12.066740 | orchestrator | Thursday 01 January 2026 00:33:11 +0000 (0:00:00.763) 0:03:52.926 ****** 2026-01-01 00:33:12.066756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:33:12.066772 | orchestrator | 2026-01-01 00:33:12.066786 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-01 00:33:12.066806 | orchestrator | Thursday 01 January 2026 00:33:12 +0000 (0:00:00.445) 0:03:53.371 ****** 2026-01-01 00:34:28.390355 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:28.390489 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:34:28.390501 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:34:28.390508 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:34:28.390515 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:34:28.390521 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:34:28.390528 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:34:28.390535 | orchestrator | 2026-01-01 00:34:28.390542 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-01-01 00:34:28.390572 | 
orchestrator | Thursday 01 January 2026 00:33:19 +0000 (0:00:07.494) 0:04:00.866 ****** 2026-01-01 00:34:28.390579 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:28.390585 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:28.390592 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:28.390598 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:28.390604 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:28.390610 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:28.390617 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:28.390623 | orchestrator | 2026-01-01 00:34:28.390629 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-01 00:34:28.390636 | orchestrator | Thursday 01 January 2026 00:33:20 +0000 (0:00:01.252) 0:04:02.119 ****** 2026-01-01 00:34:28.390642 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:28.390648 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:28.390655 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:28.390661 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:28.390667 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:28.390673 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:28.390679 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:28.390685 | orchestrator | 2026-01-01 00:34:28.390692 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-01 00:34:28.390698 | orchestrator | Thursday 01 January 2026 00:33:21 +0000 (0:00:01.129) 0:04:03.248 ****** 2026-01-01 00:34:28.390704 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:28.390711 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:28.390717 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:28.390723 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:28.390729 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:28.390735 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:28.390741 | 
orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:28.390747 | orchestrator | 2026-01-01 00:34:28.390754 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-01-01 00:34:28.390761 | orchestrator | Thursday 01 January 2026 00:33:22 +0000 (0:00:00.332) 0:04:03.580 ****** 2026-01-01 00:34:28.390767 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:28.390774 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:28.390780 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:28.390786 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:28.390792 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:28.390798 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:28.390804 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:28.390810 | orchestrator | 2026-01-01 00:34:28.390816 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-01-01 00:34:28.390822 | orchestrator | Thursday 01 January 2026 00:33:22 +0000 (0:00:00.342) 0:04:03.922 ****** 2026-01-01 00:34:28.390829 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:28.390835 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:28.390841 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:28.390847 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:28.390853 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:28.390859 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:28.390865 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:28.390871 | orchestrator | 2026-01-01 00:34:28.390877 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-01-01 00:34:28.390883 | orchestrator | Thursday 01 January 2026 00:33:22 +0000 (0:00:00.349) 0:04:04.272 ****** 2026-01-01 00:34:28.390889 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:28.390896 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:28.390904 | 
orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:28.390911 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:28.390918 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:28.390925 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:28.390932 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:28.390939 | orchestrator | 2026-01-01 00:34:28.390946 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-01-01 00:34:28.390959 | orchestrator | Thursday 01 January 2026 00:33:28 +0000 (0:00:05.622) 0:04:09.895 ****** 2026-01-01 00:34:28.390970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:34:28.390980 | orchestrator | 2026-01-01 00:34:28.390987 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-01-01 00:34:28.390994 | orchestrator | Thursday 01 January 2026 00:33:29 +0000 (0:00:00.424) 0:04:10.319 ****** 2026-01-01 00:34:28.391001 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-01-01 00:34:28.391008 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-01-01 00:34:28.391016 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:34:28.391024 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-01-01 00:34:28.391031 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-01-01 00:34:28.391038 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:34:28.391046 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-01-01 00:34:28.391053 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-01-01 00:34:28.391061 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-01-01 00:34:28.391068 | 
orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-01-01 00:34:28.391075 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:28.391083 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-01-01 00:34:28.391091 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-01-01 00:34:28.391098 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:34:28.391105 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:34:28.391113 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-01-01 00:34:28.391133 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-01-01 00:34:28.391141 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:34:28.391148 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-01-01 00:34:28.391155 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-01-01 00:34:28.391162 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:34:28.391170 | orchestrator | 2026-01-01 00:34:28.391177 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-01-01 00:34:28.391184 | orchestrator | Thursday 01 January 2026 00:33:29 +0000 (0:00:00.359) 0:04:10.679 ****** 2026-01-01 00:34:28.391191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:34:28.391198 | orchestrator | 2026-01-01 00:34:28.391206 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-01-01 00:34:28.391213 | orchestrator | Thursday 01 January 2026 00:33:29 +0000 (0:00:00.464) 0:04:11.143 ****** 2026-01-01 00:34:28.391220 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-01-01 00:34:28.391227 | orchestrator | 
skipping: [testbed-manager] 2026-01-01 00:34:28.391235 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-01-01 00:34:28.391242 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:34:28.391249 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-01-01 00:34:28.391256 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:28.391264 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-01-01 00:34:28.391271 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:34:28.391277 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-01-01 00:34:28.391283 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-01-01 00:34:28.391289 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:34:28.391300 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:34:28.391307 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-01-01 00:34:28.391313 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:34:28.391319 | orchestrator | 2026-01-01 00:34:28.391325 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-01-01 00:34:28.391331 | orchestrator | Thursday 01 January 2026 00:33:30 +0000 (0:00:00.333) 0:04:11.477 ****** 2026-01-01 00:34:28.391351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:34:28.391358 | orchestrator | 2026-01-01 00:34:28.391364 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-01-01 00:34:28.391370 | orchestrator | Thursday 01 January 2026 00:33:30 +0000 (0:00:00.434) 0:04:11.912 ****** 2026-01-01 00:34:28.391376 | orchestrator | changed: [testbed-node-5] 
2026-01-01 00:34:28.391397 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:34:28.391405 | orchestrator | changed: [testbed-manager] 2026-01-01 00:34:28.391411 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:34:28.391417 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:34:28.391423 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:34:28.391430 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:34:28.391436 | orchestrator | 2026-01-01 00:34:28.391442 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-01-01 00:34:28.391448 | orchestrator | Thursday 01 January 2026 00:34:04 +0000 (0:00:34.337) 0:04:46.249 ****** 2026-01-01 00:34:28.391454 | orchestrator | changed: [testbed-manager] 2026-01-01 00:34:28.391460 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:34:28.391466 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:34:28.391472 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:34:28.391479 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:34:28.391485 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:34:28.391491 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:34:28.391497 | orchestrator | 2026-01-01 00:34:28.391503 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-01-01 00:34:28.391509 | orchestrator | Thursday 01 January 2026 00:34:13 +0000 (0:00:08.180) 0:04:54.430 ****** 2026-01-01 00:34:28.391515 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:34:28.391522 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:34:28.391528 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:34:28.391534 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:34:28.391540 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:34:28.391546 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:34:28.391552 | orchestrator | changed: [testbed-manager] 2026-01-01 
00:34:28.391559 | orchestrator | 2026-01-01 00:34:28.391565 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-01-01 00:34:28.391571 | orchestrator | Thursday 01 January 2026 00:34:20 +0000 (0:00:07.589) 0:05:02.020 ****** 2026-01-01 00:34:28.391577 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:28.391587 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:28.391594 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:28.391600 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:28.391606 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:28.391612 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:28.391618 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:28.391624 | orchestrator | 2026-01-01 00:34:28.391630 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-01-01 00:34:28.391637 | orchestrator | Thursday 01 January 2026 00:34:22 +0000 (0:00:01.621) 0:05:03.641 ****** 2026-01-01 00:34:28.391643 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:34:28.391649 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:34:28.391655 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:34:28.391662 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:34:28.391672 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:34:28.391678 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:34:28.391685 | orchestrator | changed: [testbed-manager] 2026-01-01 00:34:28.391691 | orchestrator | 2026-01-01 00:34:28.391701 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-01-01 00:34:39.979471 | orchestrator | Thursday 01 January 2026 00:34:28 +0000 (0:00:06.046) 0:05:09.688 ****** 2026-01-01 00:34:39.979616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:34:39.979637 | orchestrator | 2026-01-01 00:34:39.979650 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-01-01 00:34:39.979662 | orchestrator | Thursday 01 January 2026 00:34:28 +0000 (0:00:00.436) 0:05:10.125 ****** 2026-01-01 00:34:39.979675 | orchestrator | changed: [testbed-manager] 2026-01-01 00:34:39.979701 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:34:39.979724 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:34:39.979736 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:34:39.979747 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:34:39.979758 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:34:39.979769 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:34:39.979780 | orchestrator | 2026-01-01 00:34:39.979791 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-01-01 00:34:39.979802 | orchestrator | Thursday 01 January 2026 00:34:29 +0000 (0:00:00.781) 0:05:10.906 ****** 2026-01-01 00:34:39.979813 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:39.979826 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:39.979837 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:39.979848 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:39.979859 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:39.979870 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:39.979880 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:39.979891 | orchestrator | 2026-01-01 00:34:39.979902 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-01-01 00:34:39.979913 | orchestrator | Thursday 01 January 2026 00:34:31 +0000 (0:00:01.681) 0:05:12.588 ****** 2026-01-01 00:34:39.979924 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:34:39.979935 | orchestrator | 
changed: [testbed-node-4] 2026-01-01 00:34:39.979949 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:34:39.979963 | orchestrator | changed: [testbed-manager] 2026-01-01 00:34:39.979976 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:34:39.979988 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:34:39.980001 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:34:39.980014 | orchestrator | 2026-01-01 00:34:39.980027 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-01-01 00:34:39.980040 | orchestrator | Thursday 01 January 2026 00:34:32 +0000 (0:00:00.835) 0:05:13.423 ****** 2026-01-01 00:34:39.980053 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:34:39.980066 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:34:39.980079 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:39.980092 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:34:39.980105 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:34:39.980117 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:34:39.980130 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:34:39.980143 | orchestrator | 2026-01-01 00:34:39.980155 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-01-01 00:34:39.980168 | orchestrator | Thursday 01 January 2026 00:34:32 +0000 (0:00:00.326) 0:05:13.750 ****** 2026-01-01 00:34:39.980181 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:34:39.980193 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:34:39.980206 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:39.980219 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:34:39.980232 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:34:39.980274 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:34:39.980287 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:34:39.980301 | orchestrator | 2026-01-01 
00:34:39.980314 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-01-01 00:34:39.980327 | orchestrator | Thursday 01 January 2026 00:34:32 +0000 (0:00:00.455) 0:05:14.205 ****** 2026-01-01 00:34:39.980338 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:39.980349 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:39.980360 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:39.980371 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:39.980382 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:39.980393 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:39.980421 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:39.980432 | orchestrator | 2026-01-01 00:34:39.980443 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-01-01 00:34:39.980454 | orchestrator | Thursday 01 January 2026 00:34:33 +0000 (0:00:00.326) 0:05:14.531 ****** 2026-01-01 00:34:39.980465 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:34:39.980476 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:34:39.980487 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:39.980497 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:34:39.980508 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:34:39.980519 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:34:39.980530 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:34:39.980541 | orchestrator | 2026-01-01 00:34:39.980552 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-01-01 00:34:39.980564 | orchestrator | Thursday 01 January 2026 00:34:33 +0000 (0:00:00.312) 0:05:14.844 ****** 2026-01-01 00:34:39.980593 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:39.980604 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:39.980615 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:39.980626 | orchestrator | 
ok: [testbed-node-5] 2026-01-01 00:34:39.980637 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:39.980648 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:39.980659 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:39.980669 | orchestrator | 2026-01-01 00:34:39.980680 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-01-01 00:34:39.980691 | orchestrator | Thursday 01 January 2026 00:34:33 +0000 (0:00:00.353) 0:05:15.198 ****** 2026-01-01 00:34:39.980702 | orchestrator | ok: [testbed-manager] =>  2026-01-01 00:34:39.980713 | orchestrator |  docker_version: 5:27.5.1 2026-01-01 00:34:39.980724 | orchestrator | ok: [testbed-node-3] =>  2026-01-01 00:34:39.980735 | orchestrator |  docker_version: 5:27.5.1 2026-01-01 00:34:39.980746 | orchestrator | ok: [testbed-node-4] =>  2026-01-01 00:34:39.980757 | orchestrator |  docker_version: 5:27.5.1 2026-01-01 00:34:39.980768 | orchestrator | ok: [testbed-node-5] =>  2026-01-01 00:34:39.980779 | orchestrator |  docker_version: 5:27.5.1 2026-01-01 00:34:39.980810 | orchestrator | ok: [testbed-node-0] =>  2026-01-01 00:34:39.980821 | orchestrator |  docker_version: 5:27.5.1 2026-01-01 00:34:39.980832 | orchestrator | ok: [testbed-node-1] =>  2026-01-01 00:34:39.980843 | orchestrator |  docker_version: 5:27.5.1 2026-01-01 00:34:39.980854 | orchestrator | ok: [testbed-node-2] =>  2026-01-01 00:34:39.980865 | orchestrator |  docker_version: 5:27.5.1 2026-01-01 00:34:39.980876 | orchestrator | 2026-01-01 00:34:39.980887 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-01-01 00:34:39.980898 | orchestrator | Thursday 01 January 2026 00:34:34 +0000 (0:00:00.302) 0:05:15.500 ****** 2026-01-01 00:34:39.980909 | orchestrator | ok: [testbed-manager] =>  2026-01-01 00:34:39.980919 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-01 00:34:39.980930 | orchestrator | ok: [testbed-node-3] =>  2026-01-01 
00:34:39.980941 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-01 00:34:39.980952 | orchestrator | ok: [testbed-node-4] =>  2026-01-01 00:34:39.980962 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-01 00:34:39.980973 | orchestrator | ok: [testbed-node-5] =>  2026-01-01 00:34:39.980992 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-01 00:34:39.981003 | orchestrator | ok: [testbed-node-0] =>  2026-01-01 00:34:39.981014 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-01 00:34:39.981025 | orchestrator | ok: [testbed-node-1] =>  2026-01-01 00:34:39.981036 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-01 00:34:39.981046 | orchestrator | ok: [testbed-node-2] =>  2026-01-01 00:34:39.981057 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-01 00:34:39.981068 | orchestrator | 2026-01-01 00:34:39.981079 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-01-01 00:34:39.981090 | orchestrator | Thursday 01 January 2026 00:34:34 +0000 (0:00:00.326) 0:05:15.827 ****** 2026-01-01 00:34:39.981101 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:34:39.981111 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:34:39.981122 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:39.981133 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:34:39.981144 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:34:39.981154 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:34:39.981165 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:34:39.981176 | orchestrator | 2026-01-01 00:34:39.981187 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-01-01 00:34:39.981198 | orchestrator | Thursday 01 January 2026 00:34:34 +0000 (0:00:00.342) 0:05:16.169 ****** 2026-01-01 00:34:39.981208 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:34:39.981219 | orchestrator | skipping: [testbed-node-3] 
2026-01-01 00:34:39.981230 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:39.981241 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:34:39.981251 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:34:39.981262 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:34:39.981273 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:34:39.981284 | orchestrator | 2026-01-01 00:34:39.981295 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-01-01 00:34:39.981305 | orchestrator | Thursday 01 January 2026 00:34:35 +0000 (0:00:00.302) 0:05:16.471 ****** 2026-01-01 00:34:39.981318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:34:39.981331 | orchestrator | 2026-01-01 00:34:39.981342 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-01-01 00:34:39.981353 | orchestrator | Thursday 01 January 2026 00:34:35 +0000 (0:00:00.469) 0:05:16.941 ****** 2026-01-01 00:34:39.981364 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:39.981375 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:39.981386 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:39.981411 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:39.981423 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:39.981434 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:39.981445 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:39.981455 | orchestrator | 2026-01-01 00:34:39.981466 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-01-01 00:34:39.981477 | orchestrator | Thursday 01 January 2026 00:34:36 +0000 (0:00:01.008) 0:05:17.950 ****** 2026-01-01 00:34:39.981488 | orchestrator 
| ok: [testbed-node-5] 2026-01-01 00:34:39.981499 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:39.981509 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:39.981520 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:39.981530 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:39.981541 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:39.981552 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:39.981563 | orchestrator | 2026-01-01 00:34:39.981574 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-01-01 00:34:39.981586 | orchestrator | Thursday 01 January 2026 00:34:39 +0000 (0:00:02.933) 0:05:20.883 ****** 2026-01-01 00:34:39.981604 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-01-01 00:34:39.981615 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-01-01 00:34:39.981626 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-01-01 00:34:39.981637 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-01-01 00:34:39.981653 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-01-01 00:34:39.981664 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:34:39.981675 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-01-01 00:34:39.981686 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-01-01 00:34:39.981696 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-01-01 00:34:39.981707 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-01-01 00:34:39.981718 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:34:39.981729 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-01-01 00:34:39.981740 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-01-01 00:34:39.981750 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  
2026-01-01 00:34:39.981761 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:39.981773 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-01-01 00:34:39.981790 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-01-01 00:35:38.275814 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-01-01 00:35:38.275950 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:35:38.275967 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-01-01 00:35:38.275979 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-01-01 00:35:38.275989 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-01-01 00:35:38.275999 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:35:38.276009 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:35:38.276019 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-01-01 00:35:38.276029 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-01-01 00:35:38.276039 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-01-01 00:35:38.276049 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:35:38.276059 | orchestrator | 2026-01-01 00:35:38.276070 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-01-01 00:35:38.276082 | orchestrator | Thursday 01 January 2026 00:34:40 +0000 (0:00:00.627) 0:05:21.510 ****** 2026-01-01 00:35:38.276092 | orchestrator | ok: [testbed-manager] 2026-01-01 00:35:38.276103 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:35:38.276113 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.276123 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.276133 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.276142 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.276152 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.276165 | 
orchestrator | 2026-01-01 00:35:38.276183 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-01-01 00:35:38.276200 | orchestrator | Thursday 01 January 2026 00:34:46 +0000 (0:00:06.328) 0:05:27.839 ****** 2026-01-01 00:35:38.276217 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.276234 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.276250 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:35:38.276267 | orchestrator | ok: [testbed-manager] 2026-01-01 00:35:38.276284 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.276302 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.276321 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.276341 | orchestrator | 2026-01-01 00:35:38.276355 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-01-01 00:35:38.276367 | orchestrator | Thursday 01 January 2026 00:34:47 +0000 (0:00:01.025) 0:05:28.865 ****** 2026-01-01 00:35:38.276379 | orchestrator | ok: [testbed-manager] 2026-01-01 00:35:38.276422 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:35:38.276434 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.276445 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.276456 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.276496 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.276509 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.276520 | orchestrator | 2026-01-01 00:35:38.276531 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-01-01 00:35:38.276543 | orchestrator | Thursday 01 January 2026 00:34:55 +0000 (0:00:07.904) 0:05:36.769 ****** 2026-01-01 00:35:38.276555 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.276567 | orchestrator | changed: [testbed-manager] 2026-01-01 00:35:38.276578 | orchestrator | changed: 
[testbed-node-5] 2026-01-01 00:35:38.276589 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.276600 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.276612 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.276623 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.276632 | orchestrator | 2026-01-01 00:35:38.276642 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-01-01 00:35:38.276652 | orchestrator | Thursday 01 January 2026 00:34:58 +0000 (0:00:03.371) 0:05:40.141 ****** 2026-01-01 00:35:38.276661 | orchestrator | ok: [testbed-manager] 2026-01-01 00:35:38.276671 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.276681 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.276690 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:35:38.276700 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.276709 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.276719 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.276728 | orchestrator | 2026-01-01 00:35:38.276738 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-01-01 00:35:38.276747 | orchestrator | Thursday 01 January 2026 00:35:00 +0000 (0:00:01.329) 0:05:41.471 ****** 2026-01-01 00:35:38.276757 | orchestrator | ok: [testbed-manager] 2026-01-01 00:35:38.276767 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.276776 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.276786 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:35:38.276795 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.276805 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.276814 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.276824 | orchestrator | 2026-01-01 00:35:38.276834 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
*********************** 2026-01-01 00:35:38.276843 | orchestrator | Thursday 01 January 2026 00:35:01 +0000 (0:00:01.562) 0:05:43.033 ****** 2026-01-01 00:35:38.276853 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:35:38.276863 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:35:38.276872 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:35:38.276900 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:35:38.276910 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:35:38.276920 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:35:38.276930 | orchestrator | changed: [testbed-manager] 2026-01-01 00:35:38.276939 | orchestrator | 2026-01-01 00:35:38.276949 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-01-01 00:35:38.276959 | orchestrator | Thursday 01 January 2026 00:35:02 +0000 (0:00:00.620) 0:05:43.654 ****** 2026-01-01 00:35:38.276968 | orchestrator | ok: [testbed-manager] 2026-01-01 00:35:38.276978 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:35:38.276988 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.276997 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.277007 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.277016 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.277026 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.277036 | orchestrator | 2026-01-01 00:35:38.277046 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-01-01 00:35:38.277083 | orchestrator | Thursday 01 January 2026 00:35:11 +0000 (0:00:09.229) 0:05:52.884 ****** 2026-01-01 00:35:38.277094 | orchestrator | changed: [testbed-manager] 2026-01-01 00:35:38.277104 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.277113 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.277123 | orchestrator | changed: [testbed-node-5] 2026-01-01 
00:35:38.277133 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.277142 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.277152 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.277161 | orchestrator | 2026-01-01 00:35:38.277171 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-01-01 00:35:38.277181 | orchestrator | Thursday 01 January 2026 00:35:12 +0000 (0:00:00.888) 0:05:53.772 ****** 2026-01-01 00:35:38.277190 | orchestrator | ok: [testbed-manager] 2026-01-01 00:35:38.277200 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.277209 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:35:38.277219 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.277228 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.277238 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.277247 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.277257 | orchestrator | 2026-01-01 00:35:38.277266 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-01-01 00:35:38.277276 | orchestrator | Thursday 01 January 2026 00:35:21 +0000 (0:00:08.654) 0:06:02.427 ****** 2026-01-01 00:35:38.277286 | orchestrator | ok: [testbed-manager] 2026-01-01 00:35:38.277295 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.277305 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.277314 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.277324 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:35:38.277334 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.277343 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.277353 | orchestrator | 2026-01-01 00:35:38.277362 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-01-01 00:35:38.277372 | orchestrator | Thursday 01 January 2026 00:35:31 +0000 
(0:00:10.728) 0:06:13.156 ****** 2026-01-01 00:35:38.277382 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-01-01 00:35:38.277392 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-01-01 00:35:38.277401 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-01-01 00:35:38.277411 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-01-01 00:35:38.277420 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-01-01 00:35:38.277430 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-01-01 00:35:38.277440 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-01-01 00:35:38.277450 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-01-01 00:35:38.277482 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-01-01 00:35:38.277494 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-01-01 00:35:38.277503 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-01-01 00:35:38.277513 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-01-01 00:35:38.277523 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-01-01 00:35:38.277532 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-01-01 00:35:38.277542 | orchestrator | 2026-01-01 00:35:38.277551 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-01-01 00:35:38.277561 | orchestrator | Thursday 01 January 2026 00:35:33 +0000 (0:00:01.230) 0:06:14.386 ****** 2026-01-01 00:35:38.277571 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:35:38.277581 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:35:38.277591 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:35:38.277600 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:35:38.277610 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:35:38.277620 | orchestrator | skipping: 
[testbed-node-1] 2026-01-01 00:35:38.277636 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:35:38.277646 | orchestrator | 2026-01-01 00:35:38.277655 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-01-01 00:35:38.277665 | orchestrator | Thursday 01 January 2026 00:35:33 +0000 (0:00:00.547) 0:06:14.933 ****** 2026-01-01 00:35:38.277675 | orchestrator | ok: [testbed-manager] 2026-01-01 00:35:38.277685 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:35:38.277694 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:35:38.277704 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:35:38.277713 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:35:38.277723 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:35:38.277733 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:35:38.277742 | orchestrator | 2026-01-01 00:35:38.277752 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-01-01 00:35:38.277763 | orchestrator | Thursday 01 January 2026 00:35:37 +0000 (0:00:03.619) 0:06:18.553 ****** 2026-01-01 00:35:38.277773 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:35:38.277783 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:35:38.277792 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:35:38.277802 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:35:38.277812 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:35:38.277822 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:35:38.277831 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:35:38.277841 | orchestrator | 2026-01-01 00:35:38.277851 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-01-01 00:35:38.277861 | orchestrator | Thursday 01 January 2026 00:35:37 +0000 (0:00:00.500) 0:06:19.053 ****** 2026-01-01 
00:35:38.277871 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-01 00:35:38.277881 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-01 00:35:38.277891 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:35:38.277901 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-01 00:35:38.277910 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-01 00:35:38.277920 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:35:38.277929 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-01 00:35:38.277939 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-01 00:35:38.277949 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:35:38.277965 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-01 00:35:58.511263 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-01 00:35:58.511405 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:35:58.511423 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-01 00:35:58.511547 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-01 00:35:58.511564 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:35:58.511575 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-01 00:35:58.511586 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-01 00:35:58.511598 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:35:58.511609 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-01 00:35:58.511620 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-01 00:35:58.511630 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:35:58.511642 | orchestrator |
2026-01-01 00:35:58.511656 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-01 00:35:58.511668 | orchestrator | Thursday 01 January 2026 00:35:38 +0000 (0:00:00.792) 0:06:19.846 ******
2026-01-01 00:35:58.511679 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:35:58.511691 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:35:58.511702 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:35:58.511713 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:35:58.511753 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:35:58.511765 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:35:58.511777 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:35:58.511791 | orchestrator |
2026-01-01 00:35:58.511804 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-01 00:35:58.511818 | orchestrator | Thursday 01 January 2026 00:35:39 +0000 (0:00:00.543) 0:06:20.389 ******
2026-01-01 00:35:58.511831 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:35:58.511844 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:35:58.511858 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:35:58.511871 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:35:58.511885 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:35:58.511897 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:35:58.511910 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:35:58.511922 | orchestrator |
2026-01-01 00:35:58.511937 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-01 00:35:58.511951 | orchestrator | Thursday 01 January 2026 00:35:39 +0000 (0:00:00.531) 0:06:20.920 ******
2026-01-01 00:35:58.511965 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:35:58.511977 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:35:58.511990 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:35:58.512003 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:35:58.512016 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:35:58.512028 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:35:58.512042 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:35:58.512055 | orchestrator |
2026-01-01 00:35:58.512068 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-01 00:35:58.512081 | orchestrator | Thursday 01 January 2026 00:35:40 +0000 (0:00:00.536) 0:06:21.457 ******
2026-01-01 00:35:58.512094 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:58.512106 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:35:58.512117 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:35:58.512128 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:35:58.512138 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:35:58.512149 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:35:58.512160 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:35:58.512171 | orchestrator |
2026-01-01 00:35:58.512182 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-01 00:35:58.512192 | orchestrator | Thursday 01 January 2026 00:35:42 +0000 (0:00:01.881) 0:06:23.339 ******
2026-01-01 00:35:58.512205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:35:58.512219 | orchestrator |
2026-01-01 00:35:58.512230 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-01 00:35:58.512241 | orchestrator | Thursday 01 January 2026 00:35:42 +0000 (0:00:00.907) 0:06:24.247 ******
2026-01-01 00:35:58.512251 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:58.512262 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:35:58.512273 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:35:58.512284 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:35:58.512295 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:35:58.512305 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:35:58.512317 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:35:58.512328 | orchestrator |
2026-01-01 00:35:58.512339 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-01 00:35:58.512350 | orchestrator | Thursday 01 January 2026 00:35:43 +0000 (0:00:00.871) 0:06:25.118 ******
2026-01-01 00:35:58.512361 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:58.512372 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:35:58.512388 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:35:58.512399 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:35:58.512410 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:35:58.512430 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:35:58.512441 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:35:58.512452 | orchestrator |
2026-01-01 00:35:58.512463 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-01 00:35:58.512474 | orchestrator | Thursday 01 January 2026 00:35:44 +0000 (0:00:00.871) 0:06:25.989 ******
2026-01-01 00:35:58.512512 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:58.512531 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:35:58.512551 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:35:58.512571 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:35:58.512584 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:35:58.512595 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:35:58.512605 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:35:58.512616 | orchestrator |
2026-01-01 00:35:58.512627 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-01-01 00:35:58.512658 | orchestrator | Thursday 01 January 2026 00:35:46 +0000 (0:00:01.594) 0:06:27.583 ******
2026-01-01 00:35:58.512670 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:35:58.512681 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:35:58.512692 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:35:58.512703 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:35:58.512713 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:35:58.512724 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:35:58.512735 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:35:58.512745 | orchestrator |
2026-01-01 00:35:58.512756 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-01-01 00:35:58.512767 | orchestrator | Thursday 01 January 2026 00:35:47 +0000 (0:00:01.324) 0:06:28.908 ******
2026-01-01 00:35:58.512778 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:58.512788 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:35:58.512799 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:35:58.512812 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:35:58.512830 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:35:58.512847 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:35:58.512864 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:35:58.512884 | orchestrator |
2026-01-01 00:35:58.512902 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-01-01 00:35:58.512918 | orchestrator | Thursday 01 January 2026 00:35:48 +0000 (0:00:01.323) 0:06:30.232 ******
2026-01-01 00:35:58.512929 | orchestrator | changed: [testbed-manager]
2026-01-01 00:35:58.512940 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:35:58.512950 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:35:58.512961 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:35:58.512972 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:35:58.512982 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:35:58.512993 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:35:58.513003 | orchestrator |
2026-01-01 00:35:58.513014 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-01-01 00:35:58.513025 | orchestrator | Thursday 01 January 2026 00:35:50 +0000 (0:00:01.431) 0:06:31.663 ******
2026-01-01 00:35:58.513036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:35:58.513047 | orchestrator |
2026-01-01 00:35:58.513058 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-01-01 00:35:58.513069 | orchestrator | Thursday 01 January 2026 00:35:51 +0000 (0:00:01.067) 0:06:32.731 ******
2026-01-01 00:35:58.513079 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:58.513090 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:35:58.513101 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:35:58.513111 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:35:58.513122 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:35:58.513133 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:35:58.513157 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:35:58.513169 | orchestrator |
2026-01-01 00:35:58.513180 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-01-01 00:35:58.513190 | orchestrator | Thursday 01 January 2026 00:35:52 +0000 (0:00:01.363) 0:06:34.094 ******
2026-01-01 00:35:58.513201 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:58.513212 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:35:58.513223 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:35:58.513233 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:35:58.513244 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:35:58.513255 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:35:58.513266 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:35:58.513276 | orchestrator |
2026-01-01 00:35:58.513287 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-01-01 00:35:58.513298 | orchestrator | Thursday 01 January 2026 00:35:53 +0000 (0:00:01.166) 0:06:35.260 ******
2026-01-01 00:35:58.513309 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:58.513319 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:35:58.513330 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:35:58.513341 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:35:58.513351 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:35:58.513362 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:35:58.513373 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:35:58.513384 | orchestrator |
2026-01-01 00:35:58.513395 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-01-01 00:35:58.513406 | orchestrator | Thursday 01 January 2026 00:35:55 +0000 (0:00:01.151) 0:06:36.412 ******
2026-01-01 00:35:58.513417 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:35:58.513427 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:58.513438 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:35:58.513449 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:35:58.513459 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:35:58.513470 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:35:58.513509 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:35:58.513522 | orchestrator |
2026-01-01 00:35:58.513533 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-01-01 00:35:58.513544 | orchestrator | Thursday 01 January 2026 00:35:57 +0000 (0:00:02.105) 0:06:38.518 ******
2026-01-01 00:35:58.513556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:35:58.513567 | orchestrator |
2026-01-01 00:35:58.513578 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-01 00:35:58.513588 | orchestrator | Thursday 01 January 2026 00:35:58 +0000 (0:00:00.932) 0:06:39.451 ******
2026-01-01 00:35:58.513599 | orchestrator |
2026-01-01 00:35:58.513610 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-01 00:35:58.513621 | orchestrator | Thursday 01 January 2026 00:35:58 +0000 (0:00:00.052) 0:06:39.503 ******
2026-01-01 00:35:58.513632 | orchestrator |
2026-01-01 00:35:58.513643 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-01 00:35:58.513654 | orchestrator | Thursday 01 January 2026 00:35:58 +0000 (0:00:00.050) 0:06:39.554 ******
2026-01-01 00:35:58.513664 | orchestrator |
2026-01-01 00:35:58.513675 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-01 00:35:58.513695 | orchestrator | Thursday 01 January 2026 00:35:58 +0000 (0:00:00.041) 0:06:39.596 ******
2026-01-01 00:36:24.553865 | orchestrator |
2026-01-01 00:36:24.554162 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-01 00:36:24.554918 | orchestrator | Thursday 01 January 2026 00:35:58 +0000 (0:00:00.065) 0:06:39.661 ******
2026-01-01 00:36:24.554939 | orchestrator |
2026-01-01 00:36:24.554952 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-01 00:36:24.554964 | orchestrator | Thursday 01 January 2026 00:35:58 +0000 (0:00:00.049) 0:06:39.711 ******
2026-01-01 00:36:24.555002 | orchestrator |
2026-01-01 00:36:24.555013 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-01 00:36:24.555024 | orchestrator | Thursday 01 January 2026 00:35:58 +0000 (0:00:00.043) 0:06:39.755 ******
2026-01-01 00:36:24.555034 | orchestrator |
2026-01-01 00:36:24.555044 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-01 00:36:24.555054 | orchestrator | Thursday 01 January 2026 00:35:58 +0000 (0:00:00.053) 0:06:39.808 ******
2026-01-01 00:36:24.555064 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:24.555077 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:24.555087 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:24.555098 | orchestrator |
2026-01-01 00:36:24.555108 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-01-01 00:36:24.555118 | orchestrator | Thursday 01 January 2026 00:35:59 +0000 (0:00:01.150) 0:06:40.959 ******
2026-01-01 00:36:24.555128 | orchestrator | changed: [testbed-manager]
2026-01-01 00:36:24.555140 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:24.555150 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:24.555160 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:24.555170 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:24.555179 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:24.555189 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:24.555199 | orchestrator |
2026-01-01 00:36:24.555209 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-01-01 00:36:24.555219 | orchestrator | Thursday 01 January 2026 00:36:01 +0000 (0:00:01.573) 0:06:42.533 ******
2026-01-01 00:36:24.555229 | orchestrator | changed: [testbed-manager]
2026-01-01 00:36:24.555239 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:24.555249 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:24.555259 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:24.555269 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:24.555278 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:24.555288 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:24.555298 | orchestrator |
2026-01-01 00:36:24.555308 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-01-01 00:36:24.555318 | orchestrator | Thursday 01 January 2026 00:36:02 +0000 (0:00:01.181) 0:06:43.714 ******
2026-01-01 00:36:24.555327 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:36:24.555337 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:24.555347 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:24.555357 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:24.555367 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:24.555377 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:24.555387 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:24.555397 | orchestrator |
2026-01-01 00:36:24.555407 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-01-01 00:36:24.555417 | orchestrator | Thursday 01 January 2026 00:36:04 +0000 (0:00:02.326) 0:06:46.041 ******
2026-01-01 00:36:24.555426 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:24.555436 | orchestrator |
2026-01-01 00:36:24.555446 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-01-01 00:36:24.555456 | orchestrator | Thursday 01 January 2026 00:36:04 +0000 (0:00:00.111) 0:06:46.153 ******
2026-01-01 00:36:24.555465 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:24.555475 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:24.555485 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:24.555495 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:24.555526 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:24.555538 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:24.555548 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:24.555558 | orchestrator |
2026-01-01 00:36:24.555638 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-01-01 00:36:24.555653 | orchestrator | Thursday 01 January 2026 00:36:05 +0000 (0:00:01.107) 0:06:47.260 ******
2026-01-01 00:36:24.555671 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:36:24.555682 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:24.555691 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:24.555701 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:24.555711 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:24.555721 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:24.555730 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:24.555740 | orchestrator |
2026-01-01 00:36:24.555750 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-01-01 00:36:24.555760 | orchestrator | Thursday 01 January 2026 00:36:06 +0000 (0:00:00.538) 0:06:47.799 ******
2026-01-01 00:36:24.555787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:36:24.555799 | orchestrator |
2026-01-01 00:36:24.555809 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-01-01 00:36:24.555819 | orchestrator | Thursday 01 January 2026 00:36:07 +0000 (0:00:01.125) 0:06:48.924 ******
2026-01-01 00:36:24.555829 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:24.555838 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:24.555848 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:24.555858 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:24.555867 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:24.555877 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:24.555887 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:24.555896 | orchestrator |
2026-01-01 00:36:24.555906 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-01-01 00:36:24.555916 | orchestrator | Thursday 01 January 2026 00:36:08 +0000 (0:00:00.841) 0:06:49.766 ******
2026-01-01 00:36:24.555926 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-01-01 00:36:24.555975 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-01-01 00:36:24.555987 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-01-01 00:36:24.555998 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-01-01 00:36:24.556008 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-01-01 00:36:24.556017 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-01-01 00:36:24.556027 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-01-01 00:36:24.556037 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-01-01 00:36:24.556047 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-01-01 00:36:24.556057 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-01-01 00:36:24.556067 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-01-01 00:36:24.556077 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-01-01 00:36:24.556086 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-01-01 00:36:24.556096 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-01-01 00:36:24.556106 | orchestrator |
2026-01-01 00:36:24.556116 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-01-01 00:36:24.556126 | orchestrator | Thursday 01 January 2026 00:36:10 +0000 (0:00:02.418) 0:06:52.184 ******
2026-01-01 00:36:24.556136 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:36:24.556146 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:24.556156 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:24.556166 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:24.556175 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:24.556185 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:24.556195 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:24.556205 | orchestrator |
2026-01-01 00:36:24.556215 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-01-01 00:36:24.556234 | orchestrator | Thursday 01 January 2026 00:36:11 +0000 (0:00:00.752) 0:06:52.937 ******
2026-01-01 00:36:24.556246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:36:24.556258 | orchestrator |
2026-01-01 00:36:24.556268 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-01-01 00:36:24.556278 | orchestrator | Thursday 01 January 2026 00:36:12 +0000 (0:00:00.845) 0:06:53.783 ******
2026-01-01 00:36:24.556288 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:24.556297 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:24.556307 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:24.556317 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:24.556327 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:24.556337 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:24.556346 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:24.556356 | orchestrator |
2026-01-01 00:36:24.556366 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-01-01 00:36:24.556376 | orchestrator | Thursday 01 January 2026 00:36:13 +0000 (0:00:00.941) 0:06:54.724 ******
2026-01-01 00:36:24.556386 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:24.556396 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:24.556405 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:24.556415 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:24.556425 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:24.556434 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:24.556444 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:24.556454 | orchestrator |
2026-01-01 00:36:24.556464 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-01-01 00:36:24.556473 | orchestrator | Thursday 01 January 2026 00:36:14 +0000 (0:00:01.069) 0:06:55.794 ******
2026-01-01 00:36:24.556483 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:36:24.556493 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:24.556503 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:24.556546 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:24.556557 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:24.556567 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:24.556577 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:24.556587 | orchestrator |
2026-01-01 00:36:24.556597 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-01-01 00:36:24.556606 | orchestrator | Thursday 01 January 2026 00:36:15 +0000 (0:00:00.526) 0:06:56.321 ******
2026-01-01 00:36:24.556617 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:24.556627 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:24.556636 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:24.556646 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:24.556656 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:24.556666 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:24.556675 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:24.556685 | orchestrator |
2026-01-01 00:36:24.556695 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-01-01 00:36:24.556710 | orchestrator | Thursday 01 January 2026 00:36:16 +0000 (0:00:01.526) 0:06:57.847 ******
2026-01-01 00:36:24.556720 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:36:24.556730 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:24.556740 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:24.556750 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:24.556759 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:24.556769 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:24.556779 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:24.556789 | orchestrator |
2026-01-01 00:36:24.556799 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-01-01 00:36:24.556808 | orchestrator | Thursday 01 January 2026 00:36:17 +0000 (0:00:00.518) 0:06:58.365 ******
2026-01-01 00:36:24.556826 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:24.556836 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:24.556846 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:24.556856 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:24.556866 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:24.556876 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:24.556893 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:57.187624 | orchestrator |
2026-01-01 00:36:57.187754 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-01-01 00:36:57.187771 | orchestrator | Thursday 01 January 2026 00:36:24 +0000 (0:00:07.486) 0:07:05.852 ******
2026-01-01 00:36:57.187784 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.187798 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:57.187811 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:57.187822 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:57.187833 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:57.187844 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:57.187856 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:57.187867 | orchestrator |
2026-01-01 00:36:57.187878 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-01-01 00:36:57.187889 | orchestrator | Thursday 01 January 2026 00:36:26 +0000 (0:00:01.643) 0:07:07.495 ******
2026-01-01 00:36:57.187900 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.187912 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:57.187923 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:57.187934 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:57.187945 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:57.187956 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:57.187967 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:57.187979 | orchestrator |
2026-01-01 00:36:57.187990 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-01-01 00:36:57.188001 | orchestrator | Thursday 01 January 2026 00:36:27 +0000 (0:00:01.782) 0:07:09.277 ******
2026-01-01 00:36:57.188012 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.188023 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:57.188034 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:57.188045 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:57.188055 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:57.188067 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:57.188077 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:57.188088 | orchestrator |
2026-01-01 00:36:57.188102 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-01 00:36:57.188115 | orchestrator | Thursday 01 January 2026 00:36:29 +0000 (0:00:01.714) 0:07:10.992 ******
2026-01-01 00:36:57.188128 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.188141 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:57.188154 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:57.188168 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:57.188180 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:57.188192 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:57.188205 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:57.188217 | orchestrator |
2026-01-01 00:36:57.188230 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-01 00:36:57.188244 | orchestrator | Thursday 01 January 2026 00:36:30 +0000 (0:00:00.863) 0:07:11.855 ******
2026-01-01 00:36:57.188256 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:36:57.188267 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:57.188278 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:57.188289 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:57.188300 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:57.188311 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:57.188322 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:57.188332 | orchestrator |
2026-01-01 00:36:57.188344 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-01 00:36:57.188382 | orchestrator | Thursday 01 January 2026 00:36:31 +0000 (0:00:01.174) 0:07:13.030 ******
2026-01-01 00:36:57.188394 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:36:57.188405 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:57.188416 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:57.188427 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:57.188438 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:57.188449 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:57.188460 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:57.188470 | orchestrator |
2026-01-01 00:36:57.188481 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-01 00:36:57.188492 | orchestrator | Thursday 01 January 2026 00:36:32 +0000 (0:00:00.564) 0:07:13.595 ******
2026-01-01 00:36:57.188503 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.188514 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:57.188524 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:57.188555 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:57.188567 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:57.188578 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:57.188589 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:57.188599 | orchestrator |
2026-01-01 00:36:57.188610 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-01 00:36:57.188621 | orchestrator | Thursday 01 January 2026 00:36:32 +0000 (0:00:00.531) 0:07:14.127 ******
2026-01-01 00:36:57.188632 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.188643 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:57.188654 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:57.188665 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:57.188676 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:57.188687 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:57.188698 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:57.188708 | orchestrator |
2026-01-01 00:36:57.188720 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-01 00:36:57.188731 | orchestrator | Thursday 01 January 2026 00:36:33 +0000 (0:00:00.571) 0:07:14.698 ******
2026-01-01 00:36:57.188741 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.188752 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:57.188763 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:57.188774 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:57.188785 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:57.188796 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:57.188806 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:57.188817 | orchestrator |
2026-01-01 00:36:57.188828 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-01 00:36:57.188839 | orchestrator | Thursday 01 January 2026 00:36:34 +0000 (0:00:00.745) 0:07:15.444 ******
2026-01-01 00:36:57.188850 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.188861 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:57.188871 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:57.188882 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:57.188893 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:57.188904 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:57.188915 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:57.188925 | orchestrator |
2026-01-01 00:36:57.188955 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-01 00:36:57.188967 | orchestrator | Thursday 01 January 2026 00:36:39 +0000 (0:00:05.690) 0:07:21.135 ******
2026-01-01 00:36:57.188978 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:36:57.188989 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:57.189000 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:57.189011 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:57.189022 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:57.189033 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:57.189044 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:57.189055 | orchestrator |
2026-01-01 00:36:57.189077 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-01 00:36:57.189088 | orchestrator | Thursday 01 January 2026 00:36:40 +0000 (0:00:00.549) 0:07:21.684 ******
2026-01-01 00:36:57.189101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:36:57.189115 | orchestrator |
2026-01-01 00:36:57.189126 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-01 00:36:57.189157 | orchestrator | Thursday 01 January 2026 00:36:41 +0000 (0:00:01.068) 0:07:22.753 ******
2026-01-01 00:36:57.189169 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:57.189180 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.189191 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:57.189202 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:57.189213 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:57.189224 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:57.189234 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:57.189245 | orchestrator |
2026-01-01 00:36:57.189256 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-01 00:36:57.189267 | orchestrator | Thursday 01 January 2026 00:36:43 +0000 (0:00:02.003) 0:07:24.756 ******
2026-01-01 00:36:57.189278 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.189289 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:57.189300 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:57.189311 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:57.189321 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:57.189332 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:57.189343 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:57.189353 | orchestrator |
2026-01-01 00:36:57.189365 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-01 00:36:57.189376 | orchestrator | Thursday 01 January 2026 00:36:44 +0000 (0:00:01.132) 0:07:25.889 ******
2026-01-01 00:36:57.189387 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:57.189398 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:57.189408 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:57.189419 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:57.189430 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:57.189441 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:57.189452 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:57.189463 | orchestrator |
2026-01-01 00:36:57.189474 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-01 00:36:57.189485 | orchestrator | Thursday 01 January 2026 00:36:45 +0000 (0:00:00.859) 0:07:26.749 ******
2026-01-01 00:36:57.189496 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-01 00:36:57.189509 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-01 00:36:57.189520 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-01 00:36:57.189531 | orchestrator | changed: [testbed-node-5] =>
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-01 00:36:57.189560 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-01 00:36:57.189571 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-01 00:36:57.189596 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-01 00:36:57.189608 | orchestrator | 2026-01-01 00:36:57.189638 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-01-01 00:36:57.189649 | orchestrator | Thursday 01 January 2026 00:36:47 +0000 (0:00:01.933) 0:07:28.682 ****** 2026-01-01 00:36:57.189665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:36:57.189677 | orchestrator | 2026-01-01 00:36:57.189688 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-01-01 00:36:57.189699 | orchestrator | Thursday 01 January 2026 00:36:48 +0000 (0:00:00.853) 0:07:29.536 ****** 2026-01-01 00:36:57.189710 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:36:57.189721 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:36:57.189732 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:36:57.189743 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:36:57.189754 | orchestrator | changed: [testbed-manager] 2026-01-01 00:36:57.189765 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:36:57.189776 | orchestrator | changed: 
[testbed-node-4] 2026-01-01 00:36:57.189786 | orchestrator | 2026-01-01 00:36:57.189805 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-01-01 00:37:28.345021 | orchestrator | Thursday 01 January 2026 00:36:57 +0000 (0:00:08.951) 0:07:38.488 ****** 2026-01-01 00:37:28.345151 | orchestrator | ok: [testbed-manager] 2026-01-01 00:37:28.345170 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:37:28.345183 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:37:28.345194 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:37:28.345205 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:37:28.345217 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:37:28.345228 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:37:28.345239 | orchestrator | 2026-01-01 00:37:28.345251 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-01-01 00:37:28.345263 | orchestrator | Thursday 01 January 2026 00:36:59 +0000 (0:00:01.991) 0:07:40.480 ****** 2026-01-01 00:37:28.345274 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:37:28.345285 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:37:28.345296 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:37:28.345306 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:37:28.345317 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:37:28.345328 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:37:28.345339 | orchestrator | 2026-01-01 00:37:28.345351 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-01-01 00:37:28.345362 | orchestrator | Thursday 01 January 2026 00:37:00 +0000 (0:00:01.314) 0:07:41.794 ****** 2026-01-01 00:37:28.345373 | orchestrator | changed: [testbed-manager] 2026-01-01 00:37:28.345386 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:37:28.345397 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:37:28.345408 | orchestrator | changed: 
[testbed-node-5] 2026-01-01 00:37:28.345419 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:37:28.345430 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:37:28.345441 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:37:28.345452 | orchestrator | 2026-01-01 00:37:28.345463 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-01-01 00:37:28.345474 | orchestrator | 2026-01-01 00:37:28.345485 | orchestrator | TASK [Include hardening role] ************************************************** 2026-01-01 00:37:28.345496 | orchestrator | Thursday 01 January 2026 00:37:01 +0000 (0:00:01.251) 0:07:43.046 ****** 2026-01-01 00:37:28.345507 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:37:28.345520 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:37:28.345534 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:37:28.345547 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:37:28.345618 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:37:28.345632 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:37:28.345646 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:37:28.345659 | orchestrator | 2026-01-01 00:37:28.345704 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-01-01 00:37:28.345718 | orchestrator | 2026-01-01 00:37:28.345732 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-01-01 00:37:28.345744 | orchestrator | Thursday 01 January 2026 00:37:02 +0000 (0:00:00.755) 0:07:43.802 ****** 2026-01-01 00:37:28.345757 | orchestrator | changed: [testbed-manager] 2026-01-01 00:37:28.345769 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:37:28.345782 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:37:28.345794 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:37:28.345806 | orchestrator | changed: [testbed-node-0] 2026-01-01 
00:37:28.345820 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:37:28.345832 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:37:28.345843 | orchestrator | 2026-01-01 00:37:28.345854 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-01-01 00:37:28.345865 | orchestrator | Thursday 01 January 2026 00:37:03 +0000 (0:00:01.430) 0:07:45.232 ****** 2026-01-01 00:37:28.345876 | orchestrator | ok: [testbed-manager] 2026-01-01 00:37:28.345887 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:37:28.345899 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:37:28.345910 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:37:28.345921 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:37:28.345932 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:37:28.345943 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:37:28.345953 | orchestrator | 2026-01-01 00:37:28.345965 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-01-01 00:37:28.345976 | orchestrator | Thursday 01 January 2026 00:37:05 +0000 (0:00:01.493) 0:07:46.726 ****** 2026-01-01 00:37:28.345987 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:37:28.345998 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:37:28.346009 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:37:28.346087 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:37:28.346100 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:37:28.346111 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:37:28.346122 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:37:28.346132 | orchestrator | 2026-01-01 00:37:28.346144 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-01-01 00:37:28.346155 | orchestrator | Thursday 01 January 2026 00:37:05 +0000 (0:00:00.553) 0:07:47.279 ****** 2026-01-01 00:37:28.346167 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:37:28.346180 | orchestrator | 2026-01-01 00:37:28.346191 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-01-01 00:37:28.346218 | orchestrator | Thursday 01 January 2026 00:37:07 +0000 (0:00:01.112) 0:07:48.392 ****** 2026-01-01 00:37:28.346232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:37:28.346246 | orchestrator | 2026-01-01 00:37:28.346257 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-01-01 00:37:28.346268 | orchestrator | Thursday 01 January 2026 00:37:07 +0000 (0:00:00.848) 0:07:49.241 ****** 2026-01-01 00:37:28.346279 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:37:28.346289 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:37:28.346300 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:37:28.346311 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:37:28.346322 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:37:28.346334 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:37:28.346345 | orchestrator | changed: [testbed-manager] 2026-01-01 00:37:28.346356 | orchestrator | 2026-01-01 00:37:28.346387 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-01-01 00:37:28.346399 | orchestrator | Thursday 01 January 2026 00:37:16 +0000 (0:00:08.659) 0:07:57.900 ****** 2026-01-01 00:37:28.346420 | orchestrator | changed: [testbed-manager] 2026-01-01 00:37:28.346432 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:37:28.346443 | orchestrator | changed: [testbed-node-4] 2026-01-01 
00:37:28.346454 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:37:28.346465 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:37:28.346476 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:37:28.346487 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:37:28.346498 | orchestrator | 2026-01-01 00:37:28.346509 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-01-01 00:37:28.346520 | orchestrator | Thursday 01 January 2026 00:37:17 +0000 (0:00:00.861) 0:07:58.762 ****** 2026-01-01 00:37:28.346531 | orchestrator | changed: [testbed-manager] 2026-01-01 00:37:28.346542 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:37:28.346553 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:37:28.346584 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:37:28.346595 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:37:28.346606 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:37:28.346617 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:37:28.346628 | orchestrator | 2026-01-01 00:37:28.346639 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-01-01 00:37:28.346649 | orchestrator | Thursday 01 January 2026 00:37:18 +0000 (0:00:01.337) 0:08:00.099 ****** 2026-01-01 00:37:28.346660 | orchestrator | changed: [testbed-manager] 2026-01-01 00:37:28.346671 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:37:28.346682 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:37:28.346693 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:37:28.346703 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:37:28.346714 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:37:28.346725 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:37:28.346735 | orchestrator | 2026-01-01 00:37:28.346746 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-01-01 00:37:28.346757 | orchestrator | Thursday 01 January 2026 00:37:20 +0000 (0:00:01.944) 0:08:02.043 ******
2026-01-01 00:37:28.346768 | orchestrator | changed: [testbed-manager]
2026-01-01 00:37:28.346779 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:37:28.346790 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:37:28.346800 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:37:28.346811 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:37:28.346822 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:37:28.346833 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:37:28.346843 | orchestrator |
2026-01-01 00:37:28.346854 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-01 00:37:28.346865 | orchestrator | Thursday 01 January 2026 00:37:22 +0000 (0:00:01.281) 0:08:03.325 ******
2026-01-01 00:37:28.346876 | orchestrator | changed: [testbed-manager]
2026-01-01 00:37:28.346887 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:37:28.346898 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:37:28.346909 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:37:28.346920 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:37:28.346931 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:37:28.346942 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:37:28.346952 | orchestrator |
2026-01-01 00:37:28.346963 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-01 00:37:28.346974 | orchestrator |
2026-01-01 00:37:28.346985 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-01 00:37:28.346996 | orchestrator | Thursday 01 January 2026 00:37:23 +0000 (0:00:01.178) 0:08:04.503 ******
2026-01-01 00:37:28.347008 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:37:28.347019 | orchestrator |
2026-01-01 00:37:28.347030 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-01 00:37:28.347041 | orchestrator | Thursday 01 January 2026 00:37:24 +0000 (0:00:00.915) 0:08:05.419 ******
2026-01-01 00:37:28.347059 | orchestrator | ok: [testbed-manager]
2026-01-01 00:37:28.347070 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:37:28.347081 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:37:28.347092 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:37:28.347103 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:37:28.347113 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:37:28.347124 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:37:28.347135 | orchestrator |
2026-01-01 00:37:28.347146 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-01 00:37:28.347157 | orchestrator | Thursday 01 January 2026 00:37:25 +0000 (0:00:01.074) 0:08:06.493 ******
2026-01-01 00:37:28.347168 | orchestrator | changed: [testbed-manager]
2026-01-01 00:37:28.347179 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:37:28.347190 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:37:28.347201 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:37:28.347212 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:37:28.347223 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:37:28.347234 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:37:28.347245 | orchestrator |
2026-01-01 00:37:28.347261 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-01 00:37:28.347272 | orchestrator | Thursday 01 January 2026 00:37:26 +0000 (0:00:01.180) 0:08:07.674 ******
2026-01-01 00:37:28.347284 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:37:28.347295 | orchestrator |
2026-01-01 00:37:28.347306 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-01 00:37:28.347317 | orchestrator | Thursday 01 January 2026 00:37:27 +0000 (0:00:01.106) 0:08:08.781 ******
2026-01-01 00:37:28.347327 | orchestrator | ok: [testbed-manager]
2026-01-01 00:37:28.347338 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:37:28.347349 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:37:28.347360 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:37:28.347371 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:37:28.347382 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:37:28.347393 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:37:28.347404 | orchestrator |
2026-01-01 00:37:28.347421 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-01 00:37:30.106488 | orchestrator | Thursday 01 January 2026 00:37:28 +0000 (0:00:00.864) 0:08:09.645 ******
2026-01-01 00:37:30.106683 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:37:30.106702 | orchestrator | changed: [testbed-manager]
2026-01-01 00:37:30.106714 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:37:30.106727 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:37:30.106738 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:37:30.106750 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:37:30.106761 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:37:30.106776 | orchestrator |
2026-01-01 00:37:30.106800 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:37:30.106821 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-01 00:37:30.106842 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-01 00:37:30.106862 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-01 00:37:30.106882 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-01 00:37:30.106901 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-01 00:37:30.106952 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-01 00:37:30.106964 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-01 00:37:30.106975 | orchestrator |
2026-01-01 00:37:30.106986 | orchestrator |
2026-01-01 00:37:30.106997 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:37:30.107010 | orchestrator | Thursday 01 January 2026 00:37:29 +0000 (0:00:01.174) 0:08:10.820 ******
2026-01-01 00:37:30.107024 | orchestrator | ===============================================================================
2026-01-01 00:37:30.107037 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.25s
2026-01-01 00:37:30.107049 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.73s
2026-01-01 00:37:30.107063 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.34s
2026-01-01 00:37:30.107077 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.95s
2026-01-01 00:37:30.107090 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.64s
2026-01-01 00:37:30.107103 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.06s
2026-01-01 00:37:30.107114 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.73s
2026-01-01 00:37:30.107125 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.23s
2026-01-01 00:37:30.107136 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.95s
2026-01-01 00:37:30.107146 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.66s
2026-01-01 00:37:30.107157 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.65s
2026-01-01 00:37:30.107168 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.18s
2026-01-01 00:37:30.107178 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.90s
2026-01-01 00:37:30.107189 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.59s
2026-01-01 00:37:30.107201 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.49s
2026-01-01 00:37:30.107212 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.49s
2026-01-01 00:37:30.107222 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.33s
2026-01-01 00:37:30.107233 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.05s
2026-01-01 00:37:30.107244 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.81s
2026-01-01 00:37:30.107273 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.69s
2026-01-01 00:37:30.598190 | orchestrator | + osism apply fail2ban
2026-01-01 00:37:43.600465 | orchestrator | 2026-01-01 00:37:43 | INFO  | Task a7b32ccf-471d-4181-b711-92bab2cff503 (fail2ban) was prepared for execution.
2026-01-01 00:37:43.600649 | orchestrator | 2026-01-01 00:37:43 | INFO  | It takes a moment until task a7b32ccf-471d-4181-b711-92bab2cff503 (fail2ban) has been started and output is visible here.
2026-01-01 00:38:06.207391 | orchestrator |
2026-01-01 00:38:06.207560 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-01 00:38:06.207627 | orchestrator |
2026-01-01 00:38:06.207649 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-01 00:38:06.207668 | orchestrator | Thursday 01 January 2026 00:37:48 +0000 (0:00:00.390) 0:00:00.390 ******
2026-01-01 00:38:06.207691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:38:06.207846 | orchestrator |
2026-01-01 00:38:06.207873 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-01 00:38:06.207895 | orchestrator | Thursday 01 January 2026 00:37:49 +0000 (0:00:01.199) 0:00:01.590 ******
2026-01-01 00:38:06.207916 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:38:06.207941 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:38:06.207963 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:38:06.207978 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:38:06.207992 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:38:06.208005 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:38:06.208018 | orchestrator | changed: [testbed-manager]
2026-01-01 00:38:06.208032 | orchestrator |
2026-01-01 00:38:06.208045 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-01 00:38:06.208058 | orchestrator | Thursday 01 January 2026 00:38:01 +0000 (0:00:11.221) 0:00:12.812 ******
2026-01-01 00:38:06.208071 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:38:06.208084 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:38:06.208098 | orchestrator | changed: [testbed-manager] 2026-01-01 00:38:06.208111 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:38:06.208124 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:38:06.208137 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:38:06.208149 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:38:06.208162 | orchestrator | 2026-01-01 00:38:06.208175 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-01-01 00:38:06.208189 | orchestrator | Thursday 01 January 2026 00:38:02 +0000 (0:00:01.523) 0:00:14.335 ****** 2026-01-01 00:38:06.208202 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:38:06.208216 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:38:06.208230 | orchestrator | ok: [testbed-manager] 2026-01-01 00:38:06.208244 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:38:06.208255 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:38:06.208265 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:38:06.208276 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:38:06.208286 | orchestrator | 2026-01-01 00:38:06.208297 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-01-01 00:38:06.208308 | orchestrator | Thursday 01 January 2026 00:38:04 +0000 (0:00:01.488) 0:00:15.824 ****** 2026-01-01 00:38:06.208319 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:38:06.208330 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:38:06.208340 | orchestrator | changed: [testbed-manager] 2026-01-01 00:38:06.208351 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:38:06.208363 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:38:06.208373 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:38:06.208384 | orchestrator | changed: 
[testbed-node-5]
2026-01-01 00:38:06.208395 | orchestrator |
2026-01-01 00:38:06.208406 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:38:06.208417 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:38:06.208429 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:38:06.208440 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:38:06.208451 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:38:06.208461 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:38:06.208472 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:38:06.208483 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:38:06.208504 | orchestrator |
2026-01-01 00:38:06.208515 | orchestrator |
2026-01-01 00:38:06.208526 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:38:06.208536 | orchestrator | Thursday 01 January 2026 00:38:05 +0000 (0:00:01.669) 0:00:17.493 ******
2026-01-01 00:38:06.208547 | orchestrator | ===============================================================================
2026-01-01 00:38:06.208558 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.22s
2026-01-01 00:38:06.208568 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.67s
2026-01-01 00:38:06.208618 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.52s
2026-01-01 00:38:06.208630 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.49s
2026-01-01 00:38:06.208641 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.20s
2026-01-01 00:38:06.547824 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-01 00:38:06.547939 | orchestrator | + osism apply network
2026-01-01 00:38:18.666223 | orchestrator | 2026-01-01 00:38:18 | INFO  | Task 4bfd02d0-0066-494d-a96a-b66880cc86c7 (network) was prepared for execution.
2026-01-01 00:38:18.666319 | orchestrator | 2026-01-01 00:38:18 | INFO  | It takes a moment until task 4bfd02d0-0066-494d-a96a-b66880cc86c7 (network) has been started and output is visible here.
2026-01-01 00:38:47.577196 | orchestrator |
2026-01-01 00:38:47.577312 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-01 00:38:47.577333 | orchestrator |
2026-01-01 00:38:47.577342 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-01 00:38:47.577351 | orchestrator | Thursday 01 January 2026 00:38:23 +0000 (0:00:00.288) 0:00:00.288 ******
2026-01-01 00:38:47.577361 | orchestrator | ok: [testbed-manager]
2026-01-01 00:38:47.577371 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:38:47.577380 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:38:47.577388 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:38:47.577397 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:38:47.577405 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:38:47.577413 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:38:47.577421 | orchestrator |
2026-01-01 00:38:47.577429 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-01 00:38:47.577437 | orchestrator | Thursday 01 January 2026 00:38:24 +0000 (0:00:00.862) 0:00:01.150 ******
2026-01-01 00:38:47.577448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:38:47.577459 | orchestrator |
2026-01-01 00:38:47.577467 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-01 00:38:47.577475 | orchestrator | Thursday 01 January 2026 00:38:25 +0000 (0:00:01.407) 0:00:02.558 ******
2026-01-01 00:38:47.577483 | orchestrator | ok: [testbed-manager]
2026-01-01 00:38:47.577491 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:38:47.577499 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:38:47.577507 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:38:47.577515 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:38:47.577523 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:38:47.577531 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:38:47.577539 | orchestrator |
2026-01-01 00:38:47.577547 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-01 00:38:47.577555 | orchestrator | Thursday 01 January 2026 00:38:27 +0000 (0:00:02.183) 0:00:04.741 ******
2026-01-01 00:38:47.577563 | orchestrator | ok: [testbed-manager]
2026-01-01 00:38:47.577571 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:38:47.577578 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:38:47.577586 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:38:47.577594 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:38:47.577655 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:38:47.577666 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:38:47.577674 | orchestrator |
2026-01-01 00:38:47.577682 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-01 00:38:47.577690 | orchestrator | Thursday 01 January 2026 00:38:29 +0000 (0:00:01.885) 0:00:06.627 ******
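For context, the netplan-Debian-family tasks below template and copy a netplan file such as /etc/netplan/01-osism.yaml (that filename appears in the cleanup step later in this run). A minimal sketch of the shape of such a file; the interface name, addresses, and MTU here are illustrative assumptions, not values taken from this run:

```
# Sketch of a netplan file like /etc/netplan/01-osism.yaml.
# Interface name, addresses, and MTU are illustrative assumptions.
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 192.168.16.10/20
      mtu: 1500
```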
2026-01-01 00:38:47.577698 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-01 00:38:47.577707 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-01 00:38:47.577715 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-01 00:38:47.577723 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-01 00:38:47.577730 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-01 00:38:47.577738 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-01 00:38:47.577746 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-01 00:38:47.577754 | orchestrator |
2026-01-01 00:38:47.577762 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-01 00:38:47.577769 | orchestrator | Thursday 01 January 2026 00:38:30 +0000 (0:00:00.937) 0:00:07.565 ******
2026-01-01 00:38:47.577777 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-01 00:38:47.577786 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-01 00:38:47.577794 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-01 00:38:47.577802 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-01 00:38:47.577810 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-01 00:38:47.577817 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-01 00:38:47.577825 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-01 00:38:47.577833 | orchestrator |
2026-01-01 00:38:47.577841 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-01 00:38:47.577849 | orchestrator | Thursday 01 January 2026 00:38:33 +0000 (0:00:02.888) 0:00:10.453 ******
2026-01-01 00:38:47.577857 | orchestrator | changed: [testbed-manager]
2026-01-01 00:38:47.577865 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:38:47.577873 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:38:47.577881 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:38:47.577889 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:38:47.577896 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:38:47.577904 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:38:47.577912 | orchestrator |
2026-01-01 00:38:47.577920 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-01-01 00:38:47.577928 | orchestrator | Thursday 01 January 2026 00:38:34 +0000 (0:00:01.506) 0:00:11.960 ******
2026-01-01 00:38:47.577935 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-01 00:38:47.577943 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-01 00:38:47.577951 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-01 00:38:47.577959 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-01 00:38:47.577967 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-01 00:38:47.577974 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-01 00:38:47.577982 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-01 00:38:47.577990 | orchestrator |
2026-01-01 00:38:47.577998 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-01-01 00:38:47.578006 | orchestrator | Thursday 01 January 2026 00:38:36 +0000 (0:00:01.781) 0:00:13.741 ******
2026-01-01 00:38:47.578014 | orchestrator | ok: [testbed-manager]
2026-01-01 00:38:47.578076 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:38:47.578084 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:38:47.578092 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:38:47.578100 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:38:47.578109 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:38:47.578117 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:38:47.578125 | orchestrator |
2026-01-01 00:38:47.578133 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-01-01 00:38:47.578158 | orchestrator | Thursday 01 January 2026 00:38:37 +0000 (0:00:01.180) 0:00:14.922 ******
2026-01-01 00:38:47.578174 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:38:47.578183 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:38:47.578191 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:38:47.578199 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:38:47.578207 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:38:47.578215 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:38:47.578223 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:38:47.578231 | orchestrator |
2026-01-01 00:38:47.578252 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-01-01 00:38:47.578260 | orchestrator | Thursday 01 January 2026 00:38:38 +0000 (0:00:00.708) 0:00:15.631 ******
2026-01-01 00:38:47.578269 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:38:47.578277 | orchestrator | ok: [testbed-manager]
2026-01-01 00:38:47.578284 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:38:47.578292 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:38:47.578300 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:38:47.578308 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:38:47.578316 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:38:47.578324 | orchestrator |
2026-01-01 00:38:47.578332 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-01-01 00:38:47.578340 | orchestrator | Thursday 01 January 2026 00:38:40 +0000 (0:00:02.161) 0:00:17.792 ******
2026-01-01 00:38:47.578348 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:38:47.578356 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:38:47.578364 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:38:47.578372 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:38:47.578380 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:38:47.578388 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:38:47.578397 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-01-01 00:38:47.578406 | orchestrator |
2026-01-01 00:38:47.578414 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-01-01 00:38:47.578422 | orchestrator | Thursday 01 January 2026 00:38:41 +0000 (0:00:00.935) 0:00:18.728 ******
2026-01-01 00:38:47.578430 | orchestrator | ok: [testbed-manager]
2026-01-01 00:38:47.578438 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:38:47.578446 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:38:47.578453 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:38:47.578461 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:38:47.578469 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:38:47.578477 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:38:47.578485 | orchestrator |
2026-01-01 00:38:47.578493 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-01-01 00:38:47.578501 | orchestrator | Thursday 01 January 2026 00:38:43 +0000 (0:00:01.709) 0:00:20.437 ******
2026-01-01 00:38:47.578509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:38:47.578520 | orchestrator |
2026-01-01 00:38:47.578528 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-01-01 00:38:47.578535 | orchestrator | Thursday 01 January 2026 00:38:44 +0000 (0:00:01.345) 0:00:21.783 ******
2026-01-01 00:38:47.578543 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:38:47.578551 | orchestrator | ok: [testbed-manager]
2026-01-01 00:38:47.578559 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:38:47.578567 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:38:47.578575 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:38:47.578583 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:38:47.578591 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:38:47.578598 | orchestrator |
2026-01-01 00:38:47.578606 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-01-01 00:38:47.578640 | orchestrator | Thursday 01 January 2026 00:38:45 +0000 (0:00:01.094) 0:00:22.877 ******
2026-01-01 00:38:47.578649 | orchestrator | ok: [testbed-manager]
2026-01-01 00:38:47.578657 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:38:47.578665 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:38:47.578673 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:38:47.578680 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:38:47.578688 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:38:47.578696 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:38:47.578704 | orchestrator |
2026-01-01 00:38:47.578712 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-01 00:38:47.578719 | orchestrator | Thursday 01 January 2026 00:38:46 +0000 (0:00:00.610) 0:00:23.488 ******
2026-01-01 00:38:47.578727 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-01-01 00:38:47.578735 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-01-01 00:38:47.578743 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-01-01 00:38:47.578751 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-01-01 00:38:47.578759 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-01 00:38:47.578767 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-01-01 00:38:47.578779 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-01 00:38:47.578787 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-01-01 00:38:47.578795 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-01 00:38:47.578803 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-01 00:38:47.578810 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-01 00:38:47.578818 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-01-01 00:38:47.578826 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-01 00:38:47.578834 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-01 00:38:47.578842 | orchestrator |
2026-01-01 00:38:47.578856 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-01-01 00:39:04.817526 | orchestrator | Thursday 01 January 2026 00:38:47 +0000 (0:00:01.144) 0:00:24.632 ******
2026-01-01 00:39:04.817698 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:39:04.817724 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:39:04.817739 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:39:04.817752 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:39:04.817766 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:39:04.817779 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:39:04.817791 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:39:04.817803 | orchestrator |
2026-01-01 00:39:04.817817 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-01-01 00:39:04.817830 | orchestrator | Thursday 01 January 2026 00:38:48 +0000 (0:00:00.582) 0:00:25.215 ******
2026-01-01 00:39:04.817845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-3, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-5
2026-01-01 00:39:04.817859 | orchestrator |
2026-01-01 00:39:04.817871 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-01-01 00:39:04.817883 | orchestrator | Thursday 01 January 2026 00:38:52 +0000 (0:00:04.513) 0:00:29.728 ******
2026-01-01 00:39:04.817899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.817912 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.817976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.817991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818086 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818227 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818253 | orchestrator |
2026-01-01 00:39:04.818266 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-01-01 00:39:04.818278 | orchestrator | Thursday 01 January 2026 00:38:58 +0000 (0:00:06.270) 0:00:35.999 ******
2026-01-01 00:39:04.818304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818330 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-01-01 00:39:04.818393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818406 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:04.818483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:19.121747 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-01-01 00:39:19.121893 | orchestrator |
2026-01-01 00:39:19.121911 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-01-01 00:39:19.121925 | orchestrator | Thursday 01 January 2026 00:39:04 +0000 (0:00:05.868) 0:00:41.867 ******
2026-01-01 00:39:19.121939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:39:19.121951 | orchestrator |
2026-01-01 00:39:19.121962 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
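The netdev and network files created in the two tasks above plausibly follow the systemd-networkd VXLAN scheme; a sketch for the manager's vxlan0 using values that do appear in this log (VNI 42, MTU 1350, local endpoint 192.168.16.5, address 192.168.112.5/20). The exact template output of the role is an assumption:

```
# /etc/systemd/network/30-vxlan0.netdev (sketch; exact role output is an assumption)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

# /etc/systemd/network/30-vxlan0.network
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

With no single Remote= configured, the unicast peers listed in the 'dests' items would typically be provided as forwarding-database entries (e.g. [BridgeFDB] sections with Destination=) so BUM traffic is replicated to each peer.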
2026-01-01 00:39:19.121974 | orchestrator | Thursday 01 January 2026 00:39:05 +0000 (0:00:01.177) 0:00:43.045 ******
2026-01-01 00:39:19.121985 | orchestrator | ok: [testbed-manager]
2026-01-01 00:39:19.121999 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:39:19.122010 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:39:19.122079 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:39:19.122092 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:39:19.122103 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:39:19.122114 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:39:19.122125 | orchestrator |
2026-01-01 00:39:19.122137 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-01 00:39:19.122148 | orchestrator | Thursday 01 January 2026 00:39:07 +0000 (0:00:01.207) 0:00:44.252 ******
2026-01-01 00:39:19.122159 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-01 00:39:19.122172 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-01 00:39:19.122183 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-01 00:39:19.122194 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-01 00:39:19.122205 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:39:19.122217 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-01 00:39:19.122228 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-01 00:39:19.122239 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-01 00:39:19.122250 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-01 00:39:19.122261 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:39:19.122272 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-01 00:39:19.122283 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-01 00:39:19.122294 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-01 00:39:19.122305 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-01 00:39:19.122316 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:39:19.122327 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-01 00:39:19.122338 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-01 00:39:19.122350 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-01 00:39:19.122370 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-01 00:39:19.122388 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:39:19.122407 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-01 00:39:19.122427 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-01 00:39:19.122446 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-01 00:39:19.122465 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-01 00:39:19.122487 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:39:19.122498 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-01 00:39:19.122524 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-01 00:39:19.122536 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-01 00:39:19.122547 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-01 00:39:19.122558 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:39:19.122569 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-01 00:39:19.122580 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-01 00:39:19.122590 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-01 00:39:19.122601 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-01 00:39:19.122612 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:39:19.122623 | orchestrator |
2026-01-01 00:39:19.122948 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-01-01 00:39:19.123050 | orchestrator | Thursday 01 January 2026 00:39:08 +0000 (0:00:01.007) 0:00:45.260 ******
2026-01-01 00:39:19.123069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:39:19.123085 | orchestrator |
2026-01-01 00:39:19.123097 | orchestrator | TASK [osism.commons.network : Install required packages for network-extra-init] ***
2026-01-01 00:39:19.123109 | orchestrator | Thursday 01 January 2026 00:39:09 +0000 (0:00:01.307) 0:00:46.567 ******
2026-01-01 00:39:19.123121 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:39:19.123135 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:39:19.123146 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:39:19.123157 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:39:19.123168 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:39:19.123179 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:39:19.123190 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:39:19.123201 | orchestrator |
2026-01-01 00:39:19.123212 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-01-01 00:39:19.123223 | orchestrator | Thursday 01 January 2026 00:39:10 +0000 (0:00:00.850) 0:00:47.230 ******
2026-01-01 00:39:19.123234 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:39:19.123245 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:39:19.123256 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:39:19.123267 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:39:19.123277 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:39:19.123288 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:39:19.123299 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:39:19.123310 | orchestrator |
2026-01-01 00:39:19.123321 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-01-01 00:39:19.123332 | orchestrator | Thursday 01 January 2026 00:39:11 +0000 (0:00:00.682) 0:00:48.080 ******
2026-01-01 00:39:19.123343 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:39:19.123353 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:39:19.123364 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:39:19.123375 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:39:19.123386 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:39:19.123396 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:39:19.123407 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:39:19.123418 | orchestrator |
2026-01-01 00:39:19.123429 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-01-01 00:39:19.123440 | orchestrator | Thursday 01 January 2026 00:39:11 +0000 (0:00:00.894) 0:00:48.762 ******
2026-01-01 00:39:19.123487 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:39:19.123498 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:39:19.123509 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:39:19.123520 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:39:19.123530 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:39:19.123541 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:39:19.123552 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:39:19.123563 | orchestrator |
2026-01-01 00:39:19.123574 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-01-01 00:39:19.123585 | orchestrator | Thursday 01 January 2026 00:39:12 +0000 (0:00:01.518) 0:00:49.657 ******
2026-01-01 00:39:19.123596 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:39:19.123608 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:39:19.123619 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:39:19.123672 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:39:19.123685 | orchestrator | ok: [testbed-manager]
2026-01-01 00:39:19.123695 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:39:19.123707 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:39:19.123717 | orchestrator |
2026-01-01 00:39:19.123728 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-01-01 00:39:19.123739 | orchestrator | Thursday 01 January 2026 00:39:14 +0000 (0:00:01.289) 0:00:51.175 ******
2026-01-01 00:39:19.123750 | orchestrator | ok: [testbed-manager]
2026-01-01 00:39:19.123761 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:39:19.123772 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:39:19.123783 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:39:19.123794 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:39:19.123804 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:39:19.123815 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:39:19.123826 | orchestrator |
2026-01-01 00:39:19.123837 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-01-01 00:39:19.123848 | orchestrator | Thursday 01 January 2026 00:39:15 +0000 (0:00:02.380) 0:00:52.465 ******
2026-01-01 00:39:19.123859 | orchestrator | ok: [testbed-manager]
2026-01-01 00:39:19.123869 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:39:19.123880 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:39:19.123891 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:39:19.123902 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:39:19.123912 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:39:19.123923 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:39:19.123934 | orchestrator |
2026-01-01 00:39:19.123945 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-01 00:39:19.123976 | orchestrator | Thursday 01 January 2026 00:39:17 +0000 (0:00:00.677) 0:00:54.845 ******
2026-01-01 00:39:19.123987 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:39:19.123998 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:39:19.124009 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:39:19.124020 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:39:19.124031 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:39:19.124042 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:39:19.124053 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:39:19.124063 | orchestrator |
2026-01-01 00:39:19.124074 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-01 00:39:19.124085 | orchestrator | Thursday 01 January 2026 00:39:18 +0000 (0:00:00.677) 0:00:55.523 ******
2026-01-01 00:39:19.124096 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:39:19.124107 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:39:19.124118 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:39:19.124128 |
orchestrator | skipping: [testbed-node-2] 2026-01-01 00:39:19.124139 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:39:19.124150 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:39:19.124161 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:39:19.124172 | orchestrator | 2026-01-01 00:39:19.124183 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:39:19.389014 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-01 00:39:19.389155 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-01 00:39:19.389182 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-01 00:39:19.389203 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-01 00:39:19.389224 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-01 00:39:19.389243 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-01 00:39:19.389254 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-01 00:39:19.389266 | orchestrator | 2026-01-01 00:39:19.389278 | orchestrator | 2026-01-01 00:39:19.389290 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:39:19.389303 | orchestrator | Thursday 01 January 2026 00:39:19 +0000 (0:00:00.659) 0:00:56.183 ****** 2026-01-01 00:39:19.389314 | orchestrator | =============================================================================== 2026-01-01 00:39:19.389325 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.27s 2026-01-01 00:39:19.389335 | orchestrator | 
osism.commons.network : Create systemd networkd network files ----------- 5.87s 2026-01-01 00:39:19.389346 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.51s 2026-01-01 00:39:19.389357 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.89s 2026-01-01 00:39:19.389368 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.38s 2026-01-01 00:39:19.389378 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.18s 2026-01-01 00:39:19.389389 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.16s 2026-01-01 00:39:19.389400 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.89s 2026-01-01 00:39:19.389410 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.78s 2026-01-01 00:39:19.389421 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.71s 2026-01-01 00:39:19.389432 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.52s 2026-01-01 00:39:19.389442 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.51s 2026-01-01 00:39:19.389453 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.41s 2026-01-01 00:39:19.389464 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.35s 2026-01-01 00:39:19.389474 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.31s 2026-01-01 00:39:19.389485 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.29s 2026-01-01 00:39:19.389496 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s 2026-01-01 00:39:19.389506 | orchestrator | 
osism.commons.network : Check if path for interface file exists --------- 1.18s 2026-01-01 00:39:19.389522 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.18s 2026-01-01 00:39:19.389535 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.14s 2026-01-01 00:39:19.611756 | orchestrator | + osism apply wireguard 2026-01-01 00:39:31.619542 | orchestrator | 2026-01-01 00:39:31 | INFO  | Task cfd4dbb5-dc2c-40a2-8c67-7b7f4f28e60f (wireguard) was prepared for execution. 2026-01-01 00:39:31.619809 | orchestrator | 2026-01-01 00:39:31 | INFO  | It takes a moment until task cfd4dbb5-dc2c-40a2-8c67-7b7f4f28e60f (wireguard) has been started and output is visible here. 2026-01-01 00:39:51.271801 | orchestrator | 2026-01-01 00:39:51.271937 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-01 00:39:51.271954 | orchestrator | 2026-01-01 00:39:51.271966 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-01 00:39:51.271978 | orchestrator | Thursday 01 January 2026 00:39:35 +0000 (0:00:00.227) 0:00:00.227 ****** 2026-01-01 00:39:51.271990 | orchestrator | ok: [testbed-manager] 2026-01-01 00:39:51.272003 | orchestrator | 2026-01-01 00:39:51.272015 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-01 00:39:51.272026 | orchestrator | Thursday 01 January 2026 00:39:36 +0000 (0:00:01.289) 0:00:01.517 ****** 2026-01-01 00:39:51.272037 | orchestrator | changed: [testbed-manager] 2026-01-01 00:39:51.272049 | orchestrator | 2026-01-01 00:39:51.272060 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-01 00:39:51.272071 | orchestrator | Thursday 01 January 2026 00:39:43 +0000 (0:00:06.893) 0:00:08.410 ****** 2026-01-01 00:39:51.272082 | orchestrator | changed: [testbed-manager] 2026-01-01 
00:39:51.272093 | orchestrator | 2026-01-01 00:39:51.272104 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-01 00:39:51.272115 | orchestrator | Thursday 01 January 2026 00:39:44 +0000 (0:00:00.606) 0:00:09.017 ****** 2026-01-01 00:39:51.272125 | orchestrator | changed: [testbed-manager] 2026-01-01 00:39:51.272136 | orchestrator | 2026-01-01 00:39:51.272147 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-01 00:39:51.272158 | orchestrator | Thursday 01 January 2026 00:39:44 +0000 (0:00:00.458) 0:00:09.475 ****** 2026-01-01 00:39:51.272169 | orchestrator | ok: [testbed-manager] 2026-01-01 00:39:51.272180 | orchestrator | 2026-01-01 00:39:51.272191 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-01 00:39:51.272202 | orchestrator | Thursday 01 January 2026 00:39:45 +0000 (0:00:00.768) 0:00:10.244 ****** 2026-01-01 00:39:51.272213 | orchestrator | ok: [testbed-manager] 2026-01-01 00:39:51.272224 | orchestrator | 2026-01-01 00:39:51.272235 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-01 00:39:51.272247 | orchestrator | Thursday 01 January 2026 00:39:45 +0000 (0:00:00.450) 0:00:10.694 ****** 2026-01-01 00:39:51.272258 | orchestrator | ok: [testbed-manager] 2026-01-01 00:39:51.272269 | orchestrator | 2026-01-01 00:39:51.272280 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-01 00:39:51.272293 | orchestrator | Thursday 01 January 2026 00:39:46 +0000 (0:00:00.446) 0:00:11.140 ****** 2026-01-01 00:39:51.272306 | orchestrator | changed: [testbed-manager] 2026-01-01 00:39:51.272318 | orchestrator | 2026-01-01 00:39:51.272331 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-01-01 00:39:51.272344 | orchestrator | Thursday 01 January 2026 
00:39:47 +0000 (0:00:01.101) 0:00:12.242 ****** 2026-01-01 00:39:51.272357 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:39:51.272370 | orchestrator | changed: [testbed-manager] 2026-01-01 00:39:51.272383 | orchestrator | 2026-01-01 00:39:51.272396 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-01 00:39:51.272408 | orchestrator | Thursday 01 January 2026 00:39:48 +0000 (0:00:00.885) 0:00:13.128 ****** 2026-01-01 00:39:51.272421 | orchestrator | changed: [testbed-manager] 2026-01-01 00:39:51.272434 | orchestrator | 2026-01-01 00:39:51.272447 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-01 00:39:51.272459 | orchestrator | Thursday 01 January 2026 00:39:49 +0000 (0:00:01.527) 0:00:14.655 ****** 2026-01-01 00:39:51.272472 | orchestrator | changed: [testbed-manager] 2026-01-01 00:39:51.272485 | orchestrator | 2026-01-01 00:39:51.272498 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:39:51.272511 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:39:51.272559 | orchestrator | 2026-01-01 00:39:51.272573 | orchestrator | 2026-01-01 00:39:51.272586 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:39:51.272599 | orchestrator | Thursday 01 January 2026 00:39:50 +0000 (0:00:00.916) 0:00:15.572 ****** 2026-01-01 00:39:51.272611 | orchestrator | =============================================================================== 2026-01-01 00:39:51.272625 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.89s 2026-01-01 00:39:51.272638 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.53s 2026-01-01 00:39:51.272667 | orchestrator | osism.services.wireguard : 
Install iptables package --------------------- 1.29s 2026-01-01 00:39:51.272679 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.10s 2026-01-01 00:39:51.272713 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s 2026-01-01 00:39:51.272725 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.89s 2026-01-01 00:39:51.272736 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.77s 2026-01-01 00:39:51.272747 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.61s 2026-01-01 00:39:51.272757 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s 2026-01-01 00:39:51.272768 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.45s 2026-01-01 00:39:51.272779 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s 2026-01-01 00:39:51.628219 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-01 00:39:51.670088 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-01-01 00:39:51.670193 | orchestrator | Dload Upload Total Spent Left Speed 2026-01-01 00:39:51.754264 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 13 100 13 0 0 154 0 --:--:-- --:--:-- --:--:-- 154 2026-01-01 00:39:51.768446 | orchestrator | + osism apply --environment custom workarounds 2026-01-01 00:39:53.843880 | orchestrator | 2026-01-01 00:39:53 | INFO  | Trying to run play workarounds in environment custom 2026-01-01 00:40:03.985611 | orchestrator | 2026-01-01 00:40:03 | INFO  | Task 3dd59b80-a33e-4961-889c-aadd1918f174 (workarounds) was prepared for execution. 
2026-01-01 00:40:03.985728 | orchestrator | 2026-01-01 00:40:03 | INFO  | It takes a moment until task 3dd59b80-a33e-4961-889c-aadd1918f174 (workarounds) has been started and output is visible here. 2026-01-01 00:40:29.776040 | orchestrator | 2026-01-01 00:40:29.776163 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:40:29.776179 | orchestrator | 2026-01-01 00:40:29.776191 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-01-01 00:40:29.776202 | orchestrator | Thursday 01 January 2026 00:40:08 +0000 (0:00:00.130) 0:00:00.130 ****** 2026-01-01 00:40:29.776214 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-01-01 00:40:29.776227 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-01-01 00:40:29.776238 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-01-01 00:40:29.776249 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-01-01 00:40:29.776260 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-01-01 00:40:29.776270 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-01-01 00:40:29.776281 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-01-01 00:40:29.776292 | orchestrator | 2026-01-01 00:40:29.776303 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-01-01 00:40:29.776344 | orchestrator | 2026-01-01 00:40:29.776356 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-01 00:40:29.776367 | orchestrator | Thursday 01 January 2026 00:40:09 +0000 (0:00:00.882) 0:00:01.012 ****** 2026-01-01 00:40:29.776378 | orchestrator | ok: [testbed-manager] 2026-01-01 00:40:29.776391 | orchestrator | 2026-01-01 00:40:29.776402 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-01-01 00:40:29.776413 | orchestrator | 2026-01-01 00:40:29.776424 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-01 00:40:29.776435 | orchestrator | Thursday 01 January 2026 00:40:11 +0000 (0:00:02.586) 0:00:03.599 ****** 2026-01-01 00:40:29.776445 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:40:29.776456 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:40:29.776467 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:40:29.776478 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:40:29.776489 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:40:29.776500 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:40:29.776511 | orchestrator | 2026-01-01 00:40:29.776522 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-01-01 00:40:29.776532 | orchestrator | 2026-01-01 00:40:29.776543 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-01-01 00:40:29.776554 | orchestrator | Thursday 01 January 2026 00:40:13 +0000 (0:00:01.804) 0:00:05.403 ****** 2026-01-01 00:40:29.776567 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-01 00:40:29.776583 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-01 00:40:29.776596 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-01 00:40:29.776610 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-01 00:40:29.776623 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-01 00:40:29.776636 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-01 00:40:29.776650 | orchestrator | 2026-01-01 00:40:29.776695 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-01-01 00:40:29.776710 | orchestrator | Thursday 01 January 2026 00:40:15 +0000 (0:00:01.614) 0:00:07.018 ****** 2026-01-01 00:40:29.776723 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:40:29.776736 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:40:29.776749 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:40:29.776762 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:40:29.776774 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:40:29.776788 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:40:29.776801 | orchestrator | 2026-01-01 00:40:29.776814 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-01-01 00:40:29.776826 | orchestrator | Thursday 01 January 2026 00:40:19 +0000 (0:00:03.846) 0:00:10.865 ****** 2026-01-01 00:40:29.776840 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:40:29.776853 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:40:29.776866 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:40:29.776879 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:40:29.776892 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:40:29.776904 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:40:29.776917 | orchestrator | 2026-01-01 00:40:29.776928 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-01-01 00:40:29.776939 | orchestrator | 2026-01-01 00:40:29.776951 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-01-01 00:40:29.776961 | orchestrator | Thursday 01 January 2026 00:40:19 +0000 (0:00:00.754) 0:00:11.619 ****** 2026-01-01 
00:40:29.776972 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:40:29.776983 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:40:29.777003 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:40:29.777014 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:40:29.777025 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:40:29.777055 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:40:29.777066 | orchestrator | changed: [testbed-manager] 2026-01-01 00:40:29.777077 | orchestrator | 2026-01-01 00:40:29.777088 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-01-01 00:40:29.777100 | orchestrator | Thursday 01 January 2026 00:40:21 +0000 (0:00:01.730) 0:00:13.350 ****** 2026-01-01 00:40:29.777111 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:40:29.777122 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:40:29.777133 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:40:29.777143 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:40:29.777155 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:40:29.777165 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:40:29.777195 | orchestrator | changed: [testbed-manager] 2026-01-01 00:40:29.777207 | orchestrator | 2026-01-01 00:40:29.777218 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-01-01 00:40:29.777229 | orchestrator | Thursday 01 January 2026 00:40:23 +0000 (0:00:01.576) 0:00:14.927 ****** 2026-01-01 00:40:29.777240 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:40:29.777251 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:40:29.777262 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:40:29.777273 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:40:29.777284 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:40:29.777294 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:40:29.777305 | orchestrator | ok: [testbed-manager] 
2026-01-01 00:40:29.777316 | orchestrator | 2026-01-01 00:40:29.777327 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-01-01 00:40:29.777338 | orchestrator | Thursday 01 January 2026 00:40:24 +0000 (0:00:01.437) 0:00:16.364 ****** 2026-01-01 00:40:29.777349 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:40:29.777360 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:40:29.777371 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:40:29.777382 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:40:29.777393 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:40:29.777404 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:40:29.777414 | orchestrator | changed: [testbed-manager] 2026-01-01 00:40:29.777425 | orchestrator | 2026-01-01 00:40:29.777436 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-01-01 00:40:29.777447 | orchestrator | Thursday 01 January 2026 00:40:26 +0000 (0:00:01.714) 0:00:18.078 ****** 2026-01-01 00:40:29.777457 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:40:29.777468 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:40:29.777479 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:40:29.777489 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:40:29.777500 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:40:29.777511 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:40:29.777522 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:40:29.777532 | orchestrator | 2026-01-01 00:40:29.777543 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-01-01 00:40:29.777554 | orchestrator | 2026-01-01 00:40:29.777565 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-01-01 00:40:29.777576 | orchestrator | Thursday 01 January 2026 00:40:26 +0000 (0:00:00.594) 
0:00:18.673 ****** 2026-01-01 00:40:29.777587 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:40:29.777597 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:40:29.777608 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:40:29.777619 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:40:29.777630 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:40:29.777641 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:40:29.777652 | orchestrator | ok: [testbed-manager] 2026-01-01 00:40:29.777683 | orchestrator | 2026-01-01 00:40:29.777695 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:40:29.777715 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:40:29.777727 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:40:29.777738 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:40:29.777749 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:40:29.777760 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:40:29.777771 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:40:29.777782 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:40:29.777793 | orchestrator | 2026-01-01 00:40:29.777804 | orchestrator | 2026-01-01 00:40:29.777815 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:40:29.777826 | orchestrator | Thursday 01 January 2026 00:40:29 +0000 (0:00:02.881) 0:00:21.554 ****** 2026-01-01 00:40:29.777837 | orchestrator | 
=============================================================================== 2026-01-01 00:40:29.777847 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.85s 2026-01-01 00:40:29.777858 | orchestrator | Install python3-docker -------------------------------------------------- 2.88s 2026-01-01 00:40:29.777869 | orchestrator | Apply netplan configuration --------------------------------------------- 2.59s 2026-01-01 00:40:29.777880 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2026-01-01 00:40:29.777890 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.73s 2026-01-01 00:40:29.777906 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.71s 2026-01-01 00:40:29.777918 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.61s 2026-01-01 00:40:29.777928 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s 2026-01-01 00:40:29.777939 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.44s 2026-01-01 00:40:29.777950 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.88s 2026-01-01 00:40:29.777961 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s 2026-01-01 00:40:29.777978 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.59s 2026-01-01 00:40:30.532127 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-01-01 00:40:42.708105 | orchestrator | 2026-01-01 00:40:42 | INFO  | Task 74c9ce8d-86fe-4408-8035-55fbd544f92e (reboot) was prepared for execution. 
2026-01-01 00:40:42.708219 | orchestrator | 2026-01-01 00:40:42 | INFO  | It takes a moment until task 74c9ce8d-86fe-4408-8035-55fbd544f92e (reboot) has been started and output is visible here. 2026-01-01 00:40:53.336821 | orchestrator | 2026-01-01 00:40:53.336969 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-01 00:40:53.337000 | orchestrator | 2026-01-01 00:40:53.337019 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-01 00:40:53.337041 | orchestrator | Thursday 01 January 2026 00:40:47 +0000 (0:00:00.215) 0:00:00.215 ****** 2026-01-01 00:40:53.337062 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:40:53.337086 | orchestrator | 2026-01-01 00:40:53.337099 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-01 00:40:53.337140 | orchestrator | Thursday 01 January 2026 00:40:47 +0000 (0:00:00.115) 0:00:00.330 ****** 2026-01-01 00:40:53.337153 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:40:53.337164 | orchestrator | 2026-01-01 00:40:53.337175 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-01 00:40:53.337187 | orchestrator | Thursday 01 January 2026 00:40:48 +0000 (0:00:00.985) 0:00:01.316 ****** 2026-01-01 00:40:53.337198 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:40:53.337208 | orchestrator | 2026-01-01 00:40:53.337220 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-01 00:40:53.337230 | orchestrator | 2026-01-01 00:40:53.337242 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-01 00:40:53.337254 | orchestrator | Thursday 01 January 2026 00:40:48 +0000 (0:00:00.131) 0:00:01.447 ****** 2026-01-01 00:40:53.337266 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:40:53.337280 | 
orchestrator |
2026-01-01 00:40:53.337293 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:40:53.337305 | orchestrator | Thursday 01 January 2026 00:40:48 +0000 (0:00:00.109) 0:00:01.557 ******
2026-01-01 00:40:53.337318 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:40:53.337330 | orchestrator |
2026-01-01 00:40:53.337343 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:40:53.337356 | orchestrator | Thursday 01 January 2026 00:40:49 +0000 (0:00:00.675) 0:00:02.233 ******
2026-01-01 00:40:53.337368 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:40:53.337381 | orchestrator |
2026-01-01 00:40:53.337394 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-01 00:40:53.337407 | orchestrator |
2026-01-01 00:40:53.337420 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-01 00:40:53.337433 | orchestrator | Thursday 01 January 2026 00:40:49 +0000 (0:00:00.119) 0:00:02.353 ******
2026-01-01 00:40:53.337445 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:40:53.337458 | orchestrator |
2026-01-01 00:40:53.337470 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:40:53.337483 | orchestrator | Thursday 01 January 2026 00:40:49 +0000 (0:00:00.226) 0:00:02.580 ******
2026-01-01 00:40:53.337496 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:40:53.337508 | orchestrator |
2026-01-01 00:40:53.337521 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:40:53.337534 | orchestrator | Thursday 01 January 2026 00:40:50 +0000 (0:00:00.682) 0:00:03.262 ******
2026-01-01 00:40:53.337546 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:40:53.337559 | orchestrator |
2026-01-01 00:40:53.337572 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-01 00:40:53.337585 | orchestrator |
2026-01-01 00:40:53.337598 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-01 00:40:53.337612 | orchestrator | Thursday 01 January 2026 00:40:50 +0000 (0:00:00.120) 0:00:03.382 ******
2026-01-01 00:40:53.337622 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:40:53.337633 | orchestrator |
2026-01-01 00:40:53.337644 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:40:53.337655 | orchestrator | Thursday 01 January 2026 00:40:50 +0000 (0:00:00.104) 0:00:03.487 ******
2026-01-01 00:40:53.337666 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:40:53.337710 | orchestrator |
2026-01-01 00:40:53.337724 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:40:53.337735 | orchestrator | Thursday 01 January 2026 00:40:51 +0000 (0:00:00.655) 0:00:04.142 ******
2026-01-01 00:40:53.337747 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:40:53.337758 | orchestrator |
2026-01-01 00:40:53.337769 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-01 00:40:53.337779 | orchestrator |
2026-01-01 00:40:53.337790 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-01 00:40:53.337810 | orchestrator | Thursday 01 January 2026 00:40:51 +0000 (0:00:00.120) 0:00:04.263 ******
2026-01-01 00:40:53.337821 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:40:53.337832 | orchestrator |
2026-01-01 00:40:53.337848 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:40:53.337867 | orchestrator | Thursday 01 January 2026 00:40:51 +0000 (0:00:00.130) 0:00:04.393 ******
2026-01-01 00:40:53.337904 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:40:53.337926 | orchestrator |
2026-01-01 00:40:53.337944 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:40:53.337963 | orchestrator | Thursday 01 January 2026 00:40:51 +0000 (0:00:00.700) 0:00:05.093 ******
2026-01-01 00:40:53.337980 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:40:53.337999 | orchestrator |
2026-01-01 00:40:53.338098 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-01 00:40:53.338123 | orchestrator |
2026-01-01 00:40:53.338141 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-01 00:40:53.338160 | orchestrator | Thursday 01 January 2026 00:40:52 +0000 (0:00:00.124) 0:00:05.218 ******
2026-01-01 00:40:53.338180 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:40:53.338199 | orchestrator |
2026-01-01 00:40:53.338218 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:40:53.338238 | orchestrator | Thursday 01 January 2026 00:40:52 +0000 (0:00:00.107) 0:00:05.325 ******
2026-01-01 00:40:53.338257 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:40:53.338277 | orchestrator |
2026-01-01 00:40:53.338297 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:40:53.338318 | orchestrator | Thursday 01 January 2026 00:40:52 +0000 (0:00:00.697) 0:00:06.023 ******
2026-01-01 00:40:53.338367 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:40:53.338390 | orchestrator |
2026-01-01 00:40:53.338410 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:40:53.338431 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:40:53.338453 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:40:53.338474 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:40:53.338494 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:40:53.338513 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:40:53.338533 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:40:53.338553 | orchestrator |
2026-01-01 00:40:53.338573 | orchestrator |
2026-01-01 00:40:53.338594 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:40:53.338614 | orchestrator | Thursday 01 January 2026 00:40:52 +0000 (0:00:00.045) 0:00:06.069 ******
2026-01-01 00:40:53.338634 | orchestrator | ===============================================================================
2026-01-01 00:40:53.338654 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.40s
2026-01-01 00:40:53.338704 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.79s
2026-01-01 00:40:53.338726 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s
2026-01-01 00:40:53.711469 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-01 00:41:06.078792 | orchestrator | 2026-01-01 00:41:06 | INFO  | Task 0149669f-ae51-4db0-8444-50d28da5b965 (wait-for-connection) was prepared for execution.
2026-01-01 00:41:06.078941 | orchestrator | 2026-01-01 00:41:06 | INFO  | It takes a moment until task 0149669f-ae51-4db0-8444-50d28da5b965 (wait-for-connection) has been started and output is visible here.
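The pattern in this section — trigger the reboot without blocking on it, then run a separate wait-for-connection pass — can be sketched in plain shell. This is a hedged illustration, not the testbed implementation (which does it via the `osism apply` playbooks above); the host names and SSH flags are assumptions:

```shell
# Sketch only: fire-and-forget reboots, then poll until SSH answers again.
# reboot_nodes mirrors "do not wait for the reboot to complete";
# wait_for_nodes mirrors the separate wait-for-connection play.

reboot_nodes() {
  local host
  for host in "$@"; do
    # systemctl returns as soon as the reboot is scheduled, so the loop
    # moves on to the next host immediately
    ssh "$host" sudo systemctl reboot || true
  done
}

wait_for_nodes() {
  local host
  for host in "$@"; do
    # keep probing until the host accepts an SSH connection again
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
      sleep 5
    done
  done
}
```

With the six testbed nodes this would be invoked as `reboot_nodes testbed-node-{0..5}` followed by `wait_for_nodes testbed-node-{0..5}`.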
2026-01-01 00:41:22.442770 | orchestrator |
2026-01-01 00:41:22.442891 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-01 00:41:22.442910 | orchestrator |
2026-01-01 00:41:22.442922 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-01 00:41:22.442936 | orchestrator | Thursday 01 January 2026 00:41:10 +0000 (0:00:00.246) 0:00:00.246 ******
2026-01-01 00:41:22.442948 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:41:22.442961 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:41:22.442973 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:41:22.442984 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:41:22.442995 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:41:22.443006 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:41:22.443017 | orchestrator |
2026-01-01 00:41:22.443028 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:41:22.443040 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:41:22.443053 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:41:22.443064 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:41:22.443081 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:41:22.443099 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:41:22.443140 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:41:22.443160 | orchestrator |
2026-01-01 00:41:22.443179 | orchestrator |
2026-01-01 00:41:22.443198 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:41:22.443216 | orchestrator | Thursday 01 January 2026 00:41:22 +0000 (0:00:11.601) 0:00:11.847 ******
2026-01-01 00:41:22.443235 | orchestrator | ===============================================================================
2026-01-01 00:41:22.443256 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.60s
2026-01-01 00:41:22.783062 | orchestrator | + osism apply hddtemp
2026-01-01 00:41:34.875552 | orchestrator | 2026-01-01 00:41:34 | INFO  | Task fc37a79e-90e5-472c-b5f7-27b507a958de (hddtemp) was prepared for execution.
2026-01-01 00:41:34.875715 | orchestrator | 2026-01-01 00:41:34 | INFO  | It takes a moment until task fc37a79e-90e5-472c-b5f7-27b507a958de (hddtemp) has been started and output is visible here.
2026-01-01 00:42:03.502839 | orchestrator |
2026-01-01 00:42:03.502955 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-01 00:42:03.502973 | orchestrator |
2026-01-01 00:42:03.502985 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-01 00:42:03.502997 | orchestrator | Thursday 01 January 2026 00:41:39 +0000 (0:00:00.268) 0:00:00.268 ******
2026-01-01 00:42:03.503010 | orchestrator | ok: [testbed-manager]
2026-01-01 00:42:03.503023 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:42:03.503035 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:42:03.503047 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:42:03.503059 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:42:03.503070 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:42:03.503081 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:42:03.503093 | orchestrator |
2026-01-01 00:42:03.503104 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-01 00:42:03.503138 | orchestrator | Thursday 01 January 2026 00:41:40 +0000 (0:00:00.898) 0:00:01.166 ******
2026-01-01 00:42:03.503153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:42:03.503167 | orchestrator |
2026-01-01 00:42:03.503178 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-01 00:42:03.503189 | orchestrator | Thursday 01 January 2026 00:41:41 +0000 (0:00:01.412) 0:00:02.579 ******
2026-01-01 00:42:03.503200 | orchestrator | ok: [testbed-manager]
2026-01-01 00:42:03.503212 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:42:03.503223 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:42:03.503233 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:42:03.503244 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:42:03.503255 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:42:03.503266 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:42:03.503277 | orchestrator |
2026-01-01 00:42:03.503288 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-01 00:42:03.503299 | orchestrator | Thursday 01 January 2026 00:41:43 +0000 (0:00:02.178) 0:00:04.757 ******
2026-01-01 00:42:03.503310 | orchestrator | changed: [testbed-manager]
2026-01-01 00:42:03.503322 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:42:03.503333 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:42:03.503344 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:42:03.503355 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:42:03.503366 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:42:03.503376 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:42:03.503387 | orchestrator |
2026-01-01 00:42:03.503398 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-01 00:42:03.503409 | orchestrator | Thursday 01 January 2026 00:41:44 +0000 (0:00:01.081) 0:00:05.839 ******
2026-01-01 00:42:03.503420 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:42:03.503431 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:42:03.503442 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:42:03.503453 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:42:03.503464 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:42:03.503474 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:42:03.503485 | orchestrator | ok: [testbed-manager]
2026-01-01 00:42:03.503496 | orchestrator |
2026-01-01 00:42:03.503507 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-01 00:42:03.503518 | orchestrator | Thursday 01 January 2026 00:41:45 +0000 (0:00:01.102) 0:00:06.942 ******
2026-01-01 00:42:03.503529 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:42:03.503540 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:42:03.503551 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:42:03.503562 | orchestrator | changed: [testbed-manager]
2026-01-01 00:42:03.503573 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:42:03.503584 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:42:03.503595 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:42:03.503606 | orchestrator |
2026-01-01 00:42:03.503617 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-01 00:42:03.503628 | orchestrator | Thursday 01 January 2026 00:41:46 +0000 (0:00:00.797) 0:00:07.739 ******
2026-01-01 00:42:03.503639 | orchestrator | changed: [testbed-manager]
2026-01-01 00:42:03.503650 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:42:03.503661 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:42:03.503672 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:42:03.503683 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:42:03.503762 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:42:03.503778 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:42:03.503790 | orchestrator |
2026-01-01 00:42:03.503801 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-01 00:42:03.503812 | orchestrator | Thursday 01 January 2026 00:42:00 +0000 (0:00:13.567) 0:00:21.307 ******
2026-01-01 00:42:03.503848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:42:03.503860 | orchestrator |
2026-01-01 00:42:03.503871 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-01 00:42:03.503882 | orchestrator | Thursday 01 January 2026 00:42:01 +0000 (0:00:01.139) 0:00:22.446 ******
2026-01-01 00:42:03.503893 | orchestrator | changed: [testbed-manager]
2026-01-01 00:42:03.503904 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:42:03.503915 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:42:03.503926 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:42:03.503937 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:42:03.503948 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:42:03.503959 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:42:03.503970 | orchestrator |
2026-01-01 00:42:03.503981 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:42:03.503993 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:42:03.504037 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 00:42:03.504051 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 00:42:03.504062 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 00:42:03.504073 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 00:42:03.504084 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 00:42:03.504095 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 00:42:03.504108 | orchestrator |
2026-01-01 00:42:03.504127 | orchestrator |
2026-01-01 00:42:03.504143 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:42:03.504163 | orchestrator | Thursday 01 January 2026 00:42:03 +0000 (0:00:01.768) 0:00:24.215 ******
2026-01-01 00:42:03.504181 | orchestrator | ===============================================================================
2026-01-01 00:42:03.504200 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.57s
2026-01-01 00:42:03.504212 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.18s
2026-01-01 00:42:03.504223 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.77s
2026-01-01 00:42:03.504233 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.41s
2026-01-01 00:42:03.504244 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.14s
2026-01-01 00:42:03.504255 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.10s
2026-01-01 00:42:03.504265 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.08s
2026-01-01 00:42:03.504276 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.90s
2026-01-01 00:42:03.504287 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.80s
2026-01-01 00:42:03.744086 | orchestrator | ++ semver latest 7.1.1
2026-01-01 00:42:03.793631 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-01 00:42:03.793775 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-01 00:42:03.793793 | orchestrator | + sudo systemctl restart manager.service
2026-01-01 00:42:17.669516 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-01 00:42:17.669631 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-01 00:42:17.669650 | orchestrator | + local max_attempts=60
2026-01-01 00:42:17.669663 | orchestrator | + local name=ceph-ansible
2026-01-01 00:42:17.669675 | orchestrator | + local attempt_num=1
2026-01-01 00:42:17.669687 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:42:17.711196 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-01 00:42:17.711297 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:42:17.711313 | orchestrator | + sleep 5
2026-01-01 00:42:22.719716 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:42:22.749812 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-01 00:42:22.749903 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:42:22.749913 | orchestrator | + sleep 5
2026-01-01 00:42:27.753914 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:42:27.791264 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-01 00:42:27.791366 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:42:27.791382 | orchestrator | + sleep 5
2026-01-01 00:42:32.795844 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:42:32.836131 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-01 00:42:32.836242 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:42:32.836259 | orchestrator | + sleep 5
2026-01-01 00:42:37.840317 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:42:37.877788 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-01 00:42:37.877921 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:42:37.877948 | orchestrator | + sleep 5
2026-01-01 00:42:42.882318 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:42:42.922682 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-01 00:42:42.922788 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:42:42.922795 | orchestrator | + sleep 5
2026-01-01 00:42:47.928653 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:42:47.970308 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-01 00:42:47.970407 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:42:47.970421 | orchestrator | + sleep 5
2026-01-01 00:42:52.974197 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:42:53.026671 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-01 00:42:53.026772 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:42:53.026787 | orchestrator | + sleep 5
2026-01-01 00:42:58.029239 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:42:58.089869 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-01 00:42:58.089973 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:42:58.089990 | orchestrator | + sleep 5
2026-01-01 00:43:03.093912 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:43:03.134242 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-01 00:43:03.134475 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:43:03.134565 | orchestrator | + sleep 5
2026-01-01 00:43:08.138098 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:43:08.171522 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-01 00:43:08.171628 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:43:08.171654 | orchestrator | + sleep 5
2026-01-01 00:43:13.176150 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:43:13.215574 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-01 00:43:13.215670 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:43:13.215687 | orchestrator | + sleep 5
2026-01-01 00:43:18.219536 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:43:18.257753 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-01 00:43:18.257859 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-01 00:43:18.257877 | orchestrator | + sleep 5
2026-01-01 00:43:23.261984 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-01 00:43:23.300748 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-01 00:43:23.300826 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-01 00:43:23.300834 | orchestrator | + local max_attempts=60
2026-01-01 00:43:23.300864 | orchestrator | + local name=kolla-ansible
2026-01-01 00:43:23.300870 | orchestrator | + local attempt_num=1
2026-01-01 00:43:23.300875 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-01 00:43:23.345897 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-01 00:43:23.345993 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-01 00:43:23.346008 | orchestrator | + local max_attempts=60
2026-01-01 00:43:23.346075 | orchestrator | + local name=osism-ansible
2026-01-01 00:43:23.346087 | orchestrator | + local attempt_num=1
2026-01-01 00:43:23.347120 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-01 00:43:23.382486 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-01 00:43:23.382574 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-01 00:43:23.382589 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-01 00:43:23.555244 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-01 00:43:23.723439 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-01 00:43:23.886270 | orchestrator | ARA in osism-ansible already disabled.
2026-01-01 00:43:24.116263 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-01 00:43:24.117269 | orchestrator | + osism apply gather-facts
2026-01-01 00:43:36.579362 | orchestrator | 2026-01-01 00:43:36 | INFO  | Task 91e25aad-4772-4b0d-be4b-999960ff70ea (gather-facts) was prepared for execution.
2026-01-01 00:43:36.579488 | orchestrator | 2026-01-01 00:43:36 | INFO  | It takes a moment until task 91e25aad-4772-4b0d-be4b-999960ff70ea (gather-facts) has been started and output is visible here.
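The `wait_for_container_healthy` helper traced above can be reconstructed almost line for line from the `set -x` output. Treat the following as an approximation of the real testbed script (the function name, arguments, and loop variables come straight from the trace; the error message is an assumption): it polls Docker's reported health state, which in this run moved from `unhealthy` through `starting` to `healthy` after the manager restart.

```shell
# Reconstructed from the xtrace above (approximation, not the verbatim
# testbed helper): poll "docker inspect" until the container's health
# status is "healthy", giving up after max_attempts probes 5s apart.
wait_for_container_healthy() {
  local max_attempts=$1
  local name=$2
  local attempt_num=1
  until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
    if (( attempt_num++ == max_attempts )); then
      echo "Container $name did not become healthy" >&2
      return 1
    fi
    sleep 5
  done
}
```

Called as in the log, `wait_for_container_healthy 60 ceph-ansible` allows roughly five minutes; `kolla-ansible` and `osism-ansible` were already healthy, so their calls returned on the first probe.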
2026-01-01 00:43:50.605808 | orchestrator | 2026-01-01 00:43:50.606005 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-01 00:43:50.606859 | orchestrator | 2026-01-01 00:43:50.606890 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-01 00:43:50.606903 | orchestrator | Thursday 01 January 2026 00:43:41 +0000 (0:00:00.231) 0:00:00.231 ****** 2026-01-01 00:43:50.606915 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:43:50.606929 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:43:50.606941 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:43:50.606952 | orchestrator | ok: [testbed-manager] 2026-01-01 00:43:50.606964 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:43:50.606975 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:43:50.606987 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:43:50.606998 | orchestrator | 2026-01-01 00:43:50.607009 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-01 00:43:50.607021 | orchestrator | 2026-01-01 00:43:50.607032 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-01 00:43:50.607043 | orchestrator | Thursday 01 January 2026 00:43:49 +0000 (0:00:08.549) 0:00:08.780 ****** 2026-01-01 00:43:50.607054 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:43:50.607067 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:43:50.607078 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:43:50.607089 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:43:50.607100 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:50.607111 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:50.607122 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:43:50.607133 | orchestrator | 2026-01-01 00:43:50.607144 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-01 00:43:50.607156 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:43:50.607169 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:43:50.607180 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:43:50.607191 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:43:50.607236 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:43:50.607248 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:43:50.607259 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:43:50.607270 | orchestrator | 2026-01-01 00:43:50.607281 | orchestrator | 2026-01-01 00:43:50.607292 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:43:50.607303 | orchestrator | Thursday 01 January 2026 00:43:50 +0000 (0:00:00.542) 0:00:09.324 ****** 2026-01-01 00:43:50.607314 | orchestrator | =============================================================================== 2026-01-01 00:43:50.607325 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.55s 2026-01-01 00:43:50.607336 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-01-01 00:43:50.975412 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-01 00:43:50.993061 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-01 00:43:51.006076 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-01 00:43:51.019344 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-01 00:43:51.035481 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-01 00:43:51.048068 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-01 00:43:51.065948 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-01 00:43:51.085380 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-01 00:43:51.105621 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-01 00:43:51.122262 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-01 00:43:51.137269 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-01 00:43:51.159030 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-01 00:43:51.177023 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-01 00:43:51.191391 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-01 00:43:51.206286 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-01 00:43:51.221690 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-01-01 00:43:51.238220 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-01-01 00:43:51.257665 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-01-01 00:43:51.274461 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-01-01 00:43:51.294502 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-01-01 00:43:51.309265 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-01 00:43:51.421542 | orchestrator | ok: Runtime: 0:24:44.447351
2026-01-01 00:43:51.525445 |
2026-01-01 00:43:51.525629 | TASK [Deploy services]
2026-01-01 00:43:52.059664 | orchestrator | skipping: Conditional result was False
2026-01-01 00:43:52.069347 |
2026-01-01 00:43:52.069494 | TASK [Deploy in a nutshell]
2026-01-01 00:43:52.819262 | orchestrator | + set -e
2026-01-01 00:43:52.819480 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-01 00:43:52.819507 | orchestrator | ++ export INTERACTIVE=false
2026-01-01 00:43:52.819529 | orchestrator | ++ INTERACTIVE=false
2026-01-01 00:43:52.819543 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-01 00:43:52.819556 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-01 00:43:52.819570 | orchestrator | + source /opt/manager-vars.sh
2026-01-01 00:43:52.819615 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-01 00:43:52.819645 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-01 00:43:52.819660 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-01 00:43:52.819675 | orchestrator | ++ CEPH_VERSION=reef
2026-01-01 00:43:52.819687 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-01 00:43:52.819751 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-01 00:43:52.819775 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-01 00:43:52.819805 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-01 00:43:52.819832 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-01-01 00:43:52.819848 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-01-01 00:43:52.819859 | orchestrator | ++ export ARA=false
2026-01-01 00:43:52.819871 | orchestrator | ++ ARA=false
2026-01-01 00:43:52.819882 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-01 00:43:52.819894 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-01 00:43:52.819905 | orchestrator | ++ export TEMPEST=true
2026-01-01 00:43:52.819916 | orchestrator | ++ TEMPEST=true
2026-01-01 00:43:52.819926 | orchestrator | ++ export IS_ZUUL=true
2026-01-01 00:43:52.819937 | orchestrator | ++ IS_ZUUL=true
2026-01-01 00:43:52.819948 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-01-01 00:43:52.819960 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-01-01 00:43:52.819970 | orchestrator | ++ export EXTERNAL_API=false
2026-01-01 00:43:52.819981 | orchestrator | ++ EXTERNAL_API=false
2026-01-01 00:43:52.819992 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-01 00:43:52.820003 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-01 00:43:52.820014 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-01 00:43:52.820024 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-01 00:43:52.820035 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-01 00:43:52.820047 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-01 00:43:52.820061 | orchestrator |
2026-01-01 00:43:52.820080 | orchestrator | # PULL IMAGES
2026-01-01 00:43:52.820099 | orchestrator |
2026-01-01 00:43:52.820117 | orchestrator | + echo
2026-01-01 00:43:52.820135 | orchestrator | + echo '# PULL IMAGES'
2026-01-01 00:43:52.820163 | orchestrator | + echo
2026-01-01 00:43:52.820798 | orchestrator | ++ semver latest 7.0.0
2026-01-01 00:43:52.878525 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-01 00:43:52.878619 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-01 00:43:52.878651 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-01 00:43:54.898212 | orchestrator | 2026-01-01 00:43:54 | INFO  | Trying to run play pull-images in environment custom
2026-01-01 00:44:05.083443 | orchestrator | 2026-01-01 00:44:05 | INFO  | Task f735c50b-f83b-43a0-b3af-8f05662b2deb (pull-images) was prepared for execution.
2026-01-01 00:44:05.083557 | orchestrator | 2026-01-01 00:44:05 | INFO  | Task f735c50b-f83b-43a0-b3af-8f05662b2deb is running in background. No more output. Check ARA for logs.
2026-01-01 00:44:07.346603 | orchestrator | 2026-01-01 00:44:07 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-01 00:44:17.492242 | orchestrator | 2026-01-01 00:44:17 | INFO  | Task 16f2761b-d2bd-4cb8-87d2-6d21fa3e650d (wipe-partitions) was prepared for execution.
2026-01-01 00:44:17.492339 | orchestrator | 2026-01-01 00:44:17 | INFO  | It takes a moment until task 16f2761b-d2bd-4cb8-87d2-6d21fa3e650d (wipe-partitions) has been started and output is visible here.
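The `semver latest 7.0.0` call above returned -1, so the `[[ -1 -ge 0 ]]` gate fell through and the script continued on the `latest == latest` branch instead. A minimal sketch of such a comparison helper — assuming, as the logged -1 suggests (this is not taken from the actual `include.sh` helper), that the tag `latest` sorts below any numbered release:

```python
def semver_cmp(a: str, b: str) -> int:
    """Return -1/0/1 like the `semver` shell helper seen in the log.

    Assumption: "latest" compares below any numbered release, which
    matches the logged result of `semver latest 7.0.0` -> -1.
    """
    if a == b:
        return 0
    if a == "latest":
        return -1
    if b == "latest":
        return 1
    # Compare dotted release numbers component-wise.
    va = tuple(int(p) for p in a.split("."))
    vb = tuple(int(p) for p in b.split("."))
    return (va > vb) - (va < vb)

print(semver_cmp("latest", "7.0.0"))  # -1, matching the log
```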
2026-01-01 00:44:30.641900 | orchestrator |
2026-01-01 00:44:30.641997 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-01 00:44:30.642007 | orchestrator |
2026-01-01 00:44:30.642073 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-01 00:44:30.642084 | orchestrator | Thursday 01 January 2026 00:44:21 +0000 (0:00:00.134) 0:00:00.134 ******
2026-01-01 00:44:30.642093 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:44:30.642099 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:44:30.642105 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:44:30.642111 | orchestrator |
2026-01-01 00:44:30.642116 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-01 00:44:30.642144 | orchestrator | Thursday 01 January 2026 00:44:22 +0000 (0:00:00.592) 0:00:00.726 ******
2026-01-01 00:44:30.642150 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:44:30.642156 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:44:30.642165 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:44:30.642171 | orchestrator |
2026-01-01 00:44:30.642176 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-01 00:44:30.642181 | orchestrator | Thursday 01 January 2026 00:44:23 +0000 (0:00:00.566) 0:00:01.293 ******
2026-01-01 00:44:30.642187 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:44:30.642193 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:44:30.642198 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:44:30.642203 | orchestrator |
2026-01-01 00:44:30.642209 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-01 00:44:30.642214 | orchestrator | Thursday 01 January 2026 00:44:23 +0000 (0:00:00.616) 0:00:01.909 ******
2026-01-01 00:44:30.642219 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:44:30.642224 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:44:30.642230 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:44:30.642235 | orchestrator |
2026-01-01 00:44:30.642240 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-01 00:44:30.642248 | orchestrator | Thursday 01 January 2026 00:44:23 +0000 (0:00:00.270) 0:00:02.180 ******
2026-01-01 00:44:30.642257 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-01 00:44:30.642268 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-01 00:44:30.642277 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-01 00:44:30.642285 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-01 00:44:30.642294 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-01 00:44:30.642301 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-01 00:44:30.642310 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-01 00:44:30.642318 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-01 00:44:30.642326 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-01 00:44:30.642334 | orchestrator |
2026-01-01 00:44:30.642342 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-01 00:44:30.642350 | orchestrator | Thursday 01 January 2026 00:44:25 +0000 (0:00:01.230) 0:00:03.411 ******
2026-01-01 00:44:30.642359 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-01 00:44:30.642368 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-01 00:44:30.642377 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-01 00:44:30.642387 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-01 00:44:30.642395 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-01 00:44:30.642403 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-01 00:44:30.642410 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-01 00:44:30.642415 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-01 00:44:30.642422 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-01 00:44:30.642431 | orchestrator |
2026-01-01 00:44:30.642439 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-01 00:44:30.642448 | orchestrator | Thursday 01 January 2026 00:44:26 +0000 (0:00:01.605) 0:00:05.016 ******
2026-01-01 00:44:30.642456 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-01 00:44:30.642464 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-01 00:44:30.642472 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-01 00:44:30.642482 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-01 00:44:30.642491 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-01 00:44:30.642506 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-01 00:44:30.642516 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-01 00:44:30.642533 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-01 00:44:30.642542 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-01 00:44:30.642549 | orchestrator |
2026-01-01 00:44:30.642555 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-01 00:44:30.642561 | orchestrator | Thursday 01 January 2026 00:44:28 +0000 (0:00:02.153) 0:00:07.170 ******
2026-01-01 00:44:30.642567 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:44:30.642573 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:44:30.642578 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:44:30.642584 | orchestrator |
2026-01-01 00:44:30.642590 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-01 00:44:30.642596 | orchestrator | Thursday 01 January 2026 00:44:29 +0000 (0:00:00.605) 0:00:07.775 ******
2026-01-01 00:44:30.642602 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:44:30.642607 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:44:30.642614 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:44:30.642620 | orchestrator |
2026-01-01 00:44:30.642625 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:44:30.642633 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:44:30.642640 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:44:30.642660 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:44:30.642665 | orchestrator |
2026-01-01 00:44:30.642670 | orchestrator |
2026-01-01 00:44:30.642676 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:44:30.642681 | orchestrator | Thursday 01 January 2026 00:44:30 +0000 (0:00:00.632) 0:00:08.408 ******
2026-01-01 00:44:30.642686 | orchestrator | ===============================================================================
2026-01-01 00:44:30.642691 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.15s
2026-01-01 00:44:30.642696 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.61s
2026-01-01 00:44:30.642702 | orchestrator | Check device availability ----------------------------------------------- 1.23s
2026-01-01 00:44:30.642728 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s
2026-01-01 00:44:30.642735 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.62s
2026-01-01 00:44:30.642740 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s
2026-01-01 00:44:30.642745 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s
2026-01-01 00:44:30.642750 | orchestrator | Remove all rook related logical devices --------------------------------- 0.57s
2026-01-01 00:44:30.642756 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s
2026-01-01 00:44:43.237286 | orchestrator | 2026-01-01 00:44:43 | INFO  | Task 0d3fc18c-307e-4fd9-a2c5-511e8ca0bc5f (facts) was prepared for execution.
2026-01-01 00:44:43.237404 | orchestrator | 2026-01-01 00:44:43 | INFO  | It takes a moment until task 0d3fc18c-307e-4fd9-a2c5-511e8ca0bc5f (facts) has been started and output is visible here.
2026-01-01 00:44:56.015205 | orchestrator |
2026-01-01 00:44:56.015356 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-01 00:44:56.015375 | orchestrator |
2026-01-01 00:44:56.015385 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-01 00:44:56.015394 | orchestrator | Thursday 01 January 2026 00:44:47 +0000 (0:00:00.262) 0:00:00.262 ******
2026-01-01 00:44:56.015403 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:44:56.015412 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:44:56.015420 | orchestrator | ok: [testbed-manager]
2026-01-01 00:44:56.015454 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:44:56.015463 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:44:56.015471 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:44:56.015479 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:44:56.015487 | orchestrator |
2026-01-01 00:44:56.015497 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-01 00:44:56.015505 | orchestrator | Thursday 01 January 2026 00:44:48 +0000 (0:00:01.160) 0:00:01.422 ******
2026-01-01 00:44:56.015513 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:44:56.015522 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:44:56.015530 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:44:56.015538 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:44:56.015546 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:44:56.015554 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:44:56.015562 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:44:56.015570 | orchestrator |
2026-01-01 00:44:56.015578 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-01 00:44:56.015586 | orchestrator |
2026-01-01 00:44:56.015594 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-01 00:44:56.015602 | orchestrator | Thursday 01 January 2026 00:44:50 +0000 (0:00:01.354) 0:00:02.777 ******
2026-01-01 00:44:56.015610 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:44:56.015618 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:44:56.015627 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:44:56.015635 | orchestrator | ok: [testbed-manager]
2026-01-01 00:44:56.015643 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:44:56.015651 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:44:56.015658 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:44:56.015666 | orchestrator |
2026-01-01 00:44:56.015674 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-01 00:44:56.015682 | orchestrator |
2026-01-01 00:44:56.015690 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-01 00:44:56.015738 | orchestrator | Thursday 01 January 2026 00:44:55 +0000 (0:00:04.871) 0:00:07.649 ******
2026-01-01 00:44:56.015749 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:44:56.015759 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:44:56.015768 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:44:56.015778 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:44:56.015788 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:44:56.015797 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:44:56.015806 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:44:56.015816 | orchestrator |
2026-01-01 00:44:56.015826 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:44:56.015836 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:44:56.015847 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:44:56.015856 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:44:56.015866 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:44:56.015875 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:44:56.015885 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:44:56.015895 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:44:56.015904 | orchestrator |
2026-01-01 00:44:56.015942 | orchestrator |
2026-01-01 00:44:56.015952 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:44:56.015969 | orchestrator | Thursday 01 January 2026 00:44:55 +0000 (0:00:00.544) 0:00:08.194 ******
2026-01-01 00:44:56.015979 | orchestrator | ===============================================================================
2026-01-01 00:44:56.015989 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.87s
2026-01-01 00:44:56.015999 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.35s
2026-01-01 00:44:56.016009 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.16s
2026-01-01 00:44:56.016019 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2026-01-01 00:44:58.555876 | orchestrator | 2026-01-01 00:44:58 | INFO  | Task 8f30d5d9-0d98-4a5e-a459-51e5428106f4 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-01 00:44:58.556011 | orchestrator | 2026-01-01 00:44:58 | INFO  | It takes a moment until task 8f30d5d9-0d98-4a5e-a459-51e5428106f4 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-01 00:45:10.979392 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-01 00:45:10.979526 | orchestrator | 2.16.14
2026-01-01 00:45:10.979541 | orchestrator |
2026-01-01 00:45:10.979548 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-01 00:45:10.979556 | orchestrator |
2026-01-01 00:45:10.979564 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-01 00:45:10.979570 | orchestrator | Thursday 01 January 2026 00:45:03 +0000 (0:00:00.343) 0:00:00.343 ******
2026-01-01 00:45:10.979577 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-01 00:45:10.979583 | orchestrator |
2026-01-01 00:45:10.979589 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-01 00:45:10.979649 | orchestrator | Thursday 01 January 2026 00:45:03 +0000 (0:00:00.283) 0:00:00.627 ******
2026-01-01 00:45:10.979657 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:45:10.979664 | orchestrator |
2026-01-01 00:45:10.979670 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.979676 | orchestrator | Thursday 01 January 2026 00:45:03 +0000 (0:00:00.308) 0:00:00.935 ******
2026-01-01 00:45:10.979683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-01 00:45:10.979689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-01 00:45:10.979696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-01 00:45:10.979701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-01 00:45:10.979707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-01 00:45:10.979712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-01 00:45:10.979767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-01 00:45:10.979774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-01 00:45:10.979779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-01 00:45:10.979785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-01 00:45:10.979798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-01 00:45:10.979804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-01 00:45:10.979809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-01 00:45:10.979815 | orchestrator |
2026-01-01 00:45:10.979820 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.979841 | orchestrator | Thursday 01 January 2026 00:45:04 +0000 (0:00:00.514) 0:00:01.450 ******
2026-01-01 00:45:10.979846 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.979852 | orchestrator |
2026-01-01 00:45:10.979858 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.979863 | orchestrator | Thursday 01 January 2026 00:45:04 +0000 (0:00:00.204) 0:00:01.654 ******
2026-01-01 00:45:10.979869 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.979874 | orchestrator |
2026-01-01 00:45:10.979880 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.979885 | orchestrator | Thursday 01 January 2026 00:45:04 +0000 (0:00:00.207) 0:00:01.861 ******
2026-01-01 00:45:10.979890 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.979896 | orchestrator |
2026-01-01 00:45:10.979901 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.979914 | orchestrator | Thursday 01 January 2026 00:45:04 +0000 (0:00:00.200) 0:00:02.062 ******
2026-01-01 00:45:10.979923 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.979932 | orchestrator |
2026-01-01 00:45:10.979940 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.979949 | orchestrator | Thursday 01 January 2026 00:45:05 +0000 (0:00:00.220) 0:00:02.283 ******
2026-01-01 00:45:10.979958 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.979967 | orchestrator |
2026-01-01 00:45:10.979977 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.979988 | orchestrator | Thursday 01 January 2026 00:45:05 +0000 (0:00:00.231) 0:00:02.515 ******
2026-01-01 00:45:10.979997 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.980006 | orchestrator |
2026-01-01 00:45:10.980016 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.980024 | orchestrator | Thursday 01 January 2026 00:45:05 +0000 (0:00:00.204) 0:00:02.720 ******
2026-01-01 00:45:10.980033 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.980042 | orchestrator |
2026-01-01 00:45:10.980052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.980062 | orchestrator | Thursday 01 January 2026 00:45:05 +0000 (0:00:00.235) 0:00:02.956 ******
2026-01-01 00:45:10.980071 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.980080 | orchestrator |
2026-01-01 00:45:10.980090 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.980100 | orchestrator | Thursday 01 January 2026 00:45:06 +0000 (0:00:00.225) 0:00:03.181 ******
2026-01-01 00:45:10.980109 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8)
2026-01-01 00:45:10.980117 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8)
2026-01-01 00:45:10.980123 | orchestrator |
2026-01-01 00:45:10.980129 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.980155 | orchestrator | Thursday 01 January 2026 00:45:06 +0000 (0:00:00.430) 0:00:03.612 ******
2026-01-01 00:45:10.980165 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_39cbb32c-70e6-49f5-af10-21eee2e163cb)
2026-01-01 00:45:10.980174 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_39cbb32c-70e6-49f5-af10-21eee2e163cb)
2026-01-01 00:45:10.980183 | orchestrator |
2026-01-01 00:45:10.980193 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.980203 | orchestrator | Thursday 01 January 2026 00:45:07 +0000 (0:00:00.676) 0:00:04.289 ******
2026-01-01 00:45:10.980212 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4d1e5713-0224-44c3-b7e5-3534e8d42515)
2026-01-01 00:45:10.980221 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4d1e5713-0224-44c3-b7e5-3534e8d42515)
2026-01-01 00:45:10.980231 | orchestrator |
2026-01-01 00:45:10.980241 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.980261 | orchestrator | Thursday 01 January 2026 00:45:07 +0000 (0:00:00.647) 0:00:04.936 ******
2026-01-01 00:45:10.980270 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_20b388ad-01dd-4aa6-8ed7-7c845856acdf)
2026-01-01 00:45:10.980279 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_20b388ad-01dd-4aa6-8ed7-7c845856acdf)
2026-01-01 00:45:10.980287 | orchestrator |
2026-01-01 00:45:10.980296 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:10.980305 | orchestrator | Thursday 01 January 2026 00:45:08 +0000 (0:00:00.876) 0:00:05.813 ******
2026-01-01 00:45:10.980312 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-01 00:45:10.980319 | orchestrator |
2026-01-01 00:45:10.980332 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:10.980340 | orchestrator | Thursday 01 January 2026 00:45:09 +0000 (0:00:00.366) 0:00:06.179 ******
2026-01-01 00:45:10.980352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-01 00:45:10.980365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-01 00:45:10.980376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-01 00:45:10.980388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-01 00:45:10.980399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-01 00:45:10.980411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-01 00:45:10.980424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-01 00:45:10.980436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-01 00:45:10.980449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-01 00:45:10.980461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-01 00:45:10.980473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-01 00:45:10.980486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-01 00:45:10.980497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-01 00:45:10.980510 | orchestrator |
2026-01-01 00:45:10.980517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:10.980525 | orchestrator | Thursday 01 January 2026 00:45:09 +0000 (0:00:00.460) 0:00:06.640 ******
2026-01-01 00:45:10.980532 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.980540 | orchestrator |
2026-01-01 00:45:10.980547 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:10.980554 | orchestrator | Thursday 01 January 2026 00:45:09 +0000 (0:00:00.225) 0:00:06.866 ******
2026-01-01 00:45:10.980561 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.980568 | orchestrator |
2026-01-01 00:45:10.980575 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:10.980583 | orchestrator | Thursday 01 January 2026 00:45:09 +0000 (0:00:00.209) 0:00:07.075 ******
2026-01-01 00:45:10.980590 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.980597 | orchestrator |
2026-01-01 00:45:10.980604 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:10.980612 | orchestrator | Thursday 01 January 2026 00:45:10 +0000 (0:00:00.203) 0:00:07.279 ******
2026-01-01 00:45:10.980619 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.980626 | orchestrator |
2026-01-01 00:45:10.980633 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:10.980640 | orchestrator | Thursday 01 January 2026 00:45:10 +0000 (0:00:00.199) 0:00:07.479 ******
2026-01-01 00:45:10.980654 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.980662 | orchestrator |
2026-01-01 00:45:10.980669 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:10.980676 | orchestrator | Thursday 01 January 2026 00:45:10 +0000 (0:00:00.225) 0:00:07.704 ******
2026-01-01 00:45:10.980683 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.980690 | orchestrator |
2026-01-01 00:45:10.980703 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:10.980733 | orchestrator | Thursday 01 January 2026 00:45:10 +0000 (0:00:00.191) 0:00:07.896 ******
2026-01-01 00:45:10.980747 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:10.980759 | orchestrator |
2026-01-01 00:45:10.980778 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:18.901336 | orchestrator | Thursday 01 January 2026 00:45:10 +0000 (0:00:00.195) 0:00:08.091 ******
2026-01-01 00:45:18.901453 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.901470 | orchestrator |
2026-01-01 00:45:18.901483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:18.901496 | orchestrator | Thursday 01 January 2026 00:45:11 +0000 (0:00:00.227) 0:00:08.319 ******
2026-01-01 00:45:18.901507 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-01 00:45:18.901519 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-01 00:45:18.901530 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-01 00:45:18.901541 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-01 00:45:18.901552 | orchestrator |
2026-01-01 00:45:18.901563 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:18.901574 | orchestrator | Thursday 01 January 2026 00:45:12 +0000 (0:00:01.066) 0:00:09.386 ******
2026-01-01 00:45:18.901585 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.901596 | orchestrator |
2026-01-01 00:45:18.901606 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:18.901617 | orchestrator | Thursday 01 January 2026 00:45:12 +0000 (0:00:00.205) 0:00:09.591 ******
2026-01-01 00:45:18.901628 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.901639 | orchestrator |
2026-01-01 00:45:18.901650 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:18.901661 | orchestrator | Thursday 01 January 2026 00:45:12 +0000 (0:00:00.210) 0:00:09.802 ******
2026-01-01 00:45:18.901671 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.901682 | orchestrator |
2026-01-01 00:45:18.901693 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:18.901763 | orchestrator | Thursday 01 January 2026 00:45:12 +0000 (0:00:00.196) 0:00:09.999 ******
2026-01-01 00:45:18.901781 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.901796 | orchestrator |
2026-01-01 00:45:18.901813 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-01 00:45:18.901828 | orchestrator | Thursday 01 January 2026 00:45:13 +0000 (0:00:00.217) 0:00:10.216 ******
2026-01-01 00:45:18.901844 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-01 00:45:18.901861 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-01 00:45:18.901876 | orchestrator |
2026-01-01 00:45:18.901918 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-01 00:45:18.901938 | orchestrator | Thursday 01 January 2026 00:45:13 +0000 (0:00:00.176) 0:00:10.393 ******
2026-01-01 00:45:18.901958 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.901978 | orchestrator |
2026-01-01 00:45:18.901998 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-01 00:45:18.902014 | orchestrator | Thursday 01 January 2026 00:45:13 +0000 (0:00:00.168) 0:00:10.561 ******
2026-01-01 00:45:18.902092 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.902105 | orchestrator |
2026-01-01 00:45:18.902118 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-01 00:45:18.902174 | orchestrator | Thursday 01 January 2026 00:45:13 +0000 (0:00:00.140) 0:00:10.702 ******
2026-01-01 00:45:18.902187 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.902200 | orchestrator |
2026-01-01 00:45:18.902214 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-01 00:45:18.902225 | orchestrator | Thursday 01 January 2026 00:45:13 +0000 (0:00:00.124) 0:00:10.826 ******
2026-01-01 00:45:18.902236 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:45:18.902247 | orchestrator |
2026-01-01 00:45:18.902258 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-01 00:45:18.902269 | orchestrator | Thursday 01 January 2026 00:45:13 +0000 (0:00:00.151) 0:00:10.978 ******
2026-01-01 00:45:18.902281 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd1d2d931-4931-5db0-ab2b-78163b321b95'}})
2026-01-01 00:45:18.902292 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'}})
2026-01-01 00:45:18.902303 | orchestrator |
2026-01-01 00:45:18.902314 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-01 00:45:18.902326 | orchestrator | Thursday 01 January 2026 00:45:14 +0000 (0:00:00.167) 0:00:11.145 ******
2026-01-01 00:45:18.902338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd1d2d931-4931-5db0-ab2b-78163b321b95'}})
2026-01-01 00:45:18.902357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'}})
2026-01-01 00:45:18.902368 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.902379 | orchestrator |
2026-01-01 00:45:18.902390 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-01 00:45:18.902400 | orchestrator | Thursday 01 January 2026 00:45:14 +0000 (0:00:00.151) 0:00:11.296 ******
2026-01-01 00:45:18.902411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd1d2d931-4931-5db0-ab2b-78163b321b95'}})
2026-01-01 00:45:18.902422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'}})
2026-01-01 00:45:18.902433 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.902444 | orchestrator |
2026-01-01 00:45:18.902455 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-01 00:45:18.902465 | orchestrator | Thursday 01 January 2026 00:45:14 +0000 (0:00:00.373) 0:00:11.670 ******
2026-01-01 00:45:18.902476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd1d2d931-4931-5db0-ab2b-78163b321b95'}})
2026-01-01 00:45:18.902506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'}})
2026-01-01 00:45:18.902517 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.902528 | orchestrator |
2026-01-01 00:45:18.902539 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-01 00:45:18.902556 | orchestrator | Thursday 01 January 2026 00:45:14 +0000 (0:00:00.157) 0:00:11.828 ******
2026-01-01 00:45:18.902567 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:45:18.902578 | orchestrator |
2026-01-01 00:45:18.902589 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-01 00:45:18.902600 | orchestrator | Thursday 01 January 2026 00:45:14 +0000 (0:00:00.156) 0:00:11.984 ******
2026-01-01 00:45:18.902610 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:45:18.902621 | orchestrator |
2026-01-01 00:45:18.902632 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-01 00:45:18.902643 | orchestrator | Thursday 01 January 2026 00:45:15 +0000 (0:00:00.164) 0:00:12.148 ******
2026-01-01 00:45:18.902653 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:45:18.902664 | orchestrator |
2026-01-01 00:45:18.902675 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-01 00:45:18.902686 | orchestrator | Thursday 01 January 2026 00:45:15 +0000 (0:00:00.149) 0:00:12.298 ****** 2026-01-01 00:45:18.902705 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:45:18.902716 | orchestrator | 2026-01-01 00:45:18.902763 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-01 00:45:18.902775 | orchestrator | Thursday 01 January 2026 00:45:15 +0000 (0:00:00.150) 0:00:12.449 ****** 2026-01-01 00:45:18.902786 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:45:18.902797 | orchestrator | 2026-01-01 00:45:18.902807 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-01 00:45:18.902818 | orchestrator | Thursday 01 January 2026 00:45:15 +0000 (0:00:00.135) 0:00:12.584 ****** 2026-01-01 00:45:18.902829 | orchestrator | ok: [testbed-node-3] => { 2026-01-01 00:45:18.902840 | orchestrator |  "ceph_osd_devices": { 2026-01-01 00:45:18.902850 | orchestrator |  "sdb": { 2026-01-01 00:45:18.902861 | orchestrator |  "osd_lvm_uuid": "d1d2d931-4931-5db0-ab2b-78163b321b95" 2026-01-01 00:45:18.902872 | orchestrator |  }, 2026-01-01 00:45:18.902883 | orchestrator |  "sdc": { 2026-01-01 00:45:18.902893 | orchestrator |  "osd_lvm_uuid": "9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf" 2026-01-01 00:45:18.902904 | orchestrator |  } 2026-01-01 00:45:18.902915 | orchestrator |  } 2026-01-01 00:45:18.902925 | orchestrator | } 2026-01-01 00:45:18.902936 | orchestrator | 2026-01-01 00:45:18.902947 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-01 00:45:18.902958 | orchestrator | Thursday 01 January 2026 00:45:15 +0000 (0:00:00.161) 0:00:12.746 ****** 2026-01-01 00:45:18.902969 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:45:18.902979 | orchestrator | 
2026-01-01 00:45:18.902990 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-01 00:45:18.903001 | orchestrator | Thursday 01 January 2026 00:45:15 +0000 (0:00:00.140) 0:00:12.887 ****** 2026-01-01 00:45:18.903012 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:45:18.903023 | orchestrator | 2026-01-01 00:45:18.903033 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-01 00:45:18.903044 | orchestrator | Thursday 01 January 2026 00:45:15 +0000 (0:00:00.157) 0:00:13.044 ****** 2026-01-01 00:45:18.903055 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:45:18.903065 | orchestrator | 2026-01-01 00:45:18.903076 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-01 00:45:18.903087 | orchestrator | Thursday 01 January 2026 00:45:16 +0000 (0:00:00.140) 0:00:13.185 ****** 2026-01-01 00:45:18.903097 | orchestrator | changed: [testbed-node-3] => { 2026-01-01 00:45:18.903108 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-01 00:45:18.903119 | orchestrator |  "ceph_osd_devices": { 2026-01-01 00:45:18.903130 | orchestrator |  "sdb": { 2026-01-01 00:45:18.903141 | orchestrator |  "osd_lvm_uuid": "d1d2d931-4931-5db0-ab2b-78163b321b95" 2026-01-01 00:45:18.903152 | orchestrator |  }, 2026-01-01 00:45:18.903162 | orchestrator |  "sdc": { 2026-01-01 00:45:18.903173 | orchestrator |  "osd_lvm_uuid": "9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf" 2026-01-01 00:45:18.903184 | orchestrator |  } 2026-01-01 00:45:18.903195 | orchestrator |  }, 2026-01-01 00:45:18.903205 | orchestrator |  "lvm_volumes": [ 2026-01-01 00:45:18.903216 | orchestrator |  { 2026-01-01 00:45:18.903227 | orchestrator |  "data": "osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95", 2026-01-01 00:45:18.903238 | orchestrator |  "data_vg": "ceph-d1d2d931-4931-5db0-ab2b-78163b321b95" 2026-01-01 00:45:18.903248 | orchestrator |  }, 
2026-01-01 00:45:18.903259 | orchestrator |  { 2026-01-01 00:45:18.903271 | orchestrator |  "data": "osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf", 2026-01-01 00:45:18.903288 | orchestrator |  "data_vg": "ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf" 2026-01-01 00:45:18.903319 | orchestrator |  } 2026-01-01 00:45:18.903345 | orchestrator |  ] 2026-01-01 00:45:18.903362 | orchestrator |  } 2026-01-01 00:45:18.903393 | orchestrator | } 2026-01-01 00:45:18.903410 | orchestrator | 2026-01-01 00:45:18.903429 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-01 00:45:18.903449 | orchestrator | Thursday 01 January 2026 00:45:16 +0000 (0:00:00.451) 0:00:13.637 ****** 2026-01-01 00:45:18.903466 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-01 00:45:18.903485 | orchestrator | 2026-01-01 00:45:18.903503 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-01 00:45:18.903521 | orchestrator | 2026-01-01 00:45:18.903538 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-01 00:45:18.903555 | orchestrator | Thursday 01 January 2026 00:45:18 +0000 (0:00:01.846) 0:00:15.484 ****** 2026-01-01 00:45:18.903573 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-01 00:45:18.903591 | orchestrator | 2026-01-01 00:45:18.903611 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-01 00:45:18.903629 | orchestrator | Thursday 01 January 2026 00:45:18 +0000 (0:00:00.258) 0:00:15.743 ****** 2026-01-01 00:45:18.903648 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:45:18.903660 | orchestrator | 2026-01-01 00:45:18.903683 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417172 | orchestrator | Thursday 01 January 2026 00:45:18 +0000 (0:00:00.271) 
0:00:16.014 ****** 2026-01-01 00:45:27.417276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-01 00:45:27.417286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-01 00:45:27.417293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-01 00:45:27.417300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-01 00:45:27.417306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-01 00:45:27.417313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-01 00:45:27.417319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-01 00:45:27.417325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-01 00:45:27.417331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-01 00:45:27.417337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-01 00:45:27.417343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-01 00:45:27.417352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-01 00:45:27.417358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-01 00:45:27.417364 | orchestrator | 2026-01-01 00:45:27.417371 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417377 | orchestrator | Thursday 01 January 2026 00:45:19 +0000 (0:00:00.410) 0:00:16.425 ****** 2026-01-01 00:45:27.417383 | orchestrator | skipping: 
[testbed-node-4] 2026-01-01 00:45:27.417390 | orchestrator | 2026-01-01 00:45:27.417396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417402 | orchestrator | Thursday 01 January 2026 00:45:19 +0000 (0:00:00.255) 0:00:16.680 ****** 2026-01-01 00:45:27.417407 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.417413 | orchestrator | 2026-01-01 00:45:27.417419 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417425 | orchestrator | Thursday 01 January 2026 00:45:19 +0000 (0:00:00.200) 0:00:16.881 ****** 2026-01-01 00:45:27.417431 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.417437 | orchestrator | 2026-01-01 00:45:27.417443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417482 | orchestrator | Thursday 01 January 2026 00:45:20 +0000 (0:00:00.239) 0:00:17.120 ****** 2026-01-01 00:45:27.417490 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.417496 | orchestrator | 2026-01-01 00:45:27.417502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417508 | orchestrator | Thursday 01 January 2026 00:45:20 +0000 (0:00:00.204) 0:00:17.324 ****** 2026-01-01 00:45:27.417514 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.417520 | orchestrator | 2026-01-01 00:45:27.417526 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417532 | orchestrator | Thursday 01 January 2026 00:45:20 +0000 (0:00:00.643) 0:00:17.968 ****** 2026-01-01 00:45:27.417538 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.417544 | orchestrator | 2026-01-01 00:45:27.417564 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417570 | 
orchestrator | Thursday 01 January 2026 00:45:21 +0000 (0:00:00.218) 0:00:18.187 ****** 2026-01-01 00:45:27.417576 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.417582 | orchestrator | 2026-01-01 00:45:27.417587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417593 | orchestrator | Thursday 01 January 2026 00:45:21 +0000 (0:00:00.206) 0:00:18.393 ****** 2026-01-01 00:45:27.417599 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.417605 | orchestrator | 2026-01-01 00:45:27.417611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417617 | orchestrator | Thursday 01 January 2026 00:45:21 +0000 (0:00:00.210) 0:00:18.604 ****** 2026-01-01 00:45:27.417622 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c) 2026-01-01 00:45:27.417630 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c) 2026-01-01 00:45:27.417636 | orchestrator | 2026-01-01 00:45:27.417642 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417648 | orchestrator | Thursday 01 January 2026 00:45:21 +0000 (0:00:00.457) 0:00:19.061 ****** 2026-01-01 00:45:27.417654 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_de0fd659-da77-4142-b79c-576ab421e122) 2026-01-01 00:45:27.417659 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_de0fd659-da77-4142-b79c-576ab421e122) 2026-01-01 00:45:27.417665 | orchestrator | 2026-01-01 00:45:27.417671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417677 | orchestrator | Thursday 01 January 2026 00:45:22 +0000 (0:00:00.464) 0:00:19.526 ****** 2026-01-01 00:45:27.417683 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_76bcb63e-af93-4dd0-8322-d7e595f79a86) 2026-01-01 00:45:27.417689 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_76bcb63e-af93-4dd0-8322-d7e595f79a86) 2026-01-01 00:45:27.417695 | orchestrator | 2026-01-01 00:45:27.417701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417760 | orchestrator | Thursday 01 January 2026 00:45:22 +0000 (0:00:00.416) 0:00:19.943 ****** 2026-01-01 00:45:27.417768 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a98e6dc-3aee-4304-8fec-4f13a362fcb3) 2026-01-01 00:45:27.417774 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a98e6dc-3aee-4304-8fec-4f13a362fcb3) 2026-01-01 00:45:27.417781 | orchestrator | 2026-01-01 00:45:27.417786 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:45:27.417792 | orchestrator | Thursday 01 January 2026 00:45:23 +0000 (0:00:00.463) 0:00:20.406 ****** 2026-01-01 00:45:27.417798 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-01 00:45:27.417804 | orchestrator | 2026-01-01 00:45:27.417810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:27.417816 | orchestrator | Thursday 01 January 2026 00:45:23 +0000 (0:00:00.340) 0:00:20.746 ****** 2026-01-01 00:45:27.417828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-01 00:45:27.417834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-01 00:45:27.417840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-01 00:45:27.417846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-01 00:45:27.417852 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-01 00:45:27.417858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-01 00:45:27.417864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-01 00:45:27.417870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-01 00:45:27.417876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-01 00:45:27.417881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-01 00:45:27.417887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-01 00:45:27.417893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-01 00:45:27.417899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-01 00:45:27.417905 | orchestrator | 2026-01-01 00:45:27.417911 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:27.417917 | orchestrator | Thursday 01 January 2026 00:45:24 +0000 (0:00:00.409) 0:00:21.156 ****** 2026-01-01 00:45:27.417923 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.417929 | orchestrator | 2026-01-01 00:45:27.417934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:27.417945 | orchestrator | Thursday 01 January 2026 00:45:24 +0000 (0:00:00.721) 0:00:21.878 ****** 2026-01-01 00:45:27.417951 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.417957 | orchestrator | 2026-01-01 00:45:27.417963 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-01-01 00:45:27.417969 | orchestrator | Thursday 01 January 2026 00:45:24 +0000 (0:00:00.216) 0:00:22.095 ****** 2026-01-01 00:45:27.417975 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.417981 | orchestrator | 2026-01-01 00:45:27.417986 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:27.417992 | orchestrator | Thursday 01 January 2026 00:45:25 +0000 (0:00:00.221) 0:00:22.316 ****** 2026-01-01 00:45:27.417998 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.418004 | orchestrator | 2026-01-01 00:45:27.418010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:27.418065 | orchestrator | Thursday 01 January 2026 00:45:25 +0000 (0:00:00.220) 0:00:22.537 ****** 2026-01-01 00:45:27.418071 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.418077 | orchestrator | 2026-01-01 00:45:27.418083 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:27.418089 | orchestrator | Thursday 01 January 2026 00:45:25 +0000 (0:00:00.223) 0:00:22.760 ****** 2026-01-01 00:45:27.418095 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.418101 | orchestrator | 2026-01-01 00:45:27.418107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:27.418113 | orchestrator | Thursday 01 January 2026 00:45:25 +0000 (0:00:00.189) 0:00:22.950 ****** 2026-01-01 00:45:27.418119 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:27.418125 | orchestrator | 2026-01-01 00:45:27.418131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:27.418137 | orchestrator | Thursday 01 January 2026 00:45:26 +0000 (0:00:00.222) 0:00:23.173 ****** 2026-01-01 00:45:27.418147 | orchestrator | skipping: [testbed-node-4] 
2026-01-01 00:45:27.418153 | orchestrator | 2026-01-01 00:45:27.418159 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:27.418165 | orchestrator | Thursday 01 January 2026 00:45:26 +0000 (0:00:00.201) 0:00:23.374 ****** 2026-01-01 00:45:27.418171 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-01 00:45:27.418178 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-01 00:45:27.418184 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-01 00:45:27.418190 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-01 00:45:27.418196 | orchestrator | 2026-01-01 00:45:27.418202 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:27.418208 | orchestrator | Thursday 01 January 2026 00:45:27 +0000 (0:00:00.940) 0:00:24.315 ****** 2026-01-01 00:45:27.418214 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.798406 | orchestrator | 2026-01-01 00:45:34.798509 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:34.798521 | orchestrator | Thursday 01 January 2026 00:45:27 +0000 (0:00:00.214) 0:00:24.529 ****** 2026-01-01 00:45:34.798527 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.798535 | orchestrator | 2026-01-01 00:45:34.798542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:34.798548 | orchestrator | Thursday 01 January 2026 00:45:27 +0000 (0:00:00.232) 0:00:24.761 ****** 2026-01-01 00:45:34.798555 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.798561 | orchestrator | 2026-01-01 00:45:34.798567 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:45:34.798573 | orchestrator | Thursday 01 January 2026 00:45:27 +0000 (0:00:00.194) 0:00:24.956 ****** 2026-01-01 00:45:34.798579 | 
orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.798585 | orchestrator | 2026-01-01 00:45:34.798591 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-01 00:45:34.798598 | orchestrator | Thursday 01 January 2026 00:45:28 +0000 (0:00:00.757) 0:00:25.714 ****** 2026-01-01 00:45:34.798605 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-01 00:45:34.798611 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-01 00:45:34.798617 | orchestrator | 2026-01-01 00:45:34.798622 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-01 00:45:34.798628 | orchestrator | Thursday 01 January 2026 00:45:28 +0000 (0:00:00.176) 0:00:25.891 ****** 2026-01-01 00:45:34.798635 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.798641 | orchestrator | 2026-01-01 00:45:34.798647 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-01 00:45:34.798653 | orchestrator | Thursday 01 January 2026 00:45:28 +0000 (0:00:00.147) 0:00:26.039 ****** 2026-01-01 00:45:34.798659 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.798665 | orchestrator | 2026-01-01 00:45:34.798671 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-01 00:45:34.798678 | orchestrator | Thursday 01 January 2026 00:45:29 +0000 (0:00:00.137) 0:00:26.177 ****** 2026-01-01 00:45:34.798684 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.798689 | orchestrator | 2026-01-01 00:45:34.798695 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-01 00:45:34.798701 | orchestrator | Thursday 01 January 2026 00:45:29 +0000 (0:00:00.135) 0:00:26.312 ****** 2026-01-01 00:45:34.798707 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:45:34.798715 | 
orchestrator | 2026-01-01 00:45:34.798743 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-01 00:45:34.798750 | orchestrator | Thursday 01 January 2026 00:45:29 +0000 (0:00:00.137) 0:00:26.450 ****** 2026-01-01 00:45:34.798758 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fb51feb3-1d35-54b5-8ae7-a768510de0c7'}}) 2026-01-01 00:45:34.798765 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'}}) 2026-01-01 00:45:34.798793 | orchestrator | 2026-01-01 00:45:34.798800 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-01 00:45:34.798806 | orchestrator | Thursday 01 January 2026 00:45:29 +0000 (0:00:00.299) 0:00:26.749 ****** 2026-01-01 00:45:34.798813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fb51feb3-1d35-54b5-8ae7-a768510de0c7'}})  2026-01-01 00:45:34.798839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'}})  2026-01-01 00:45:34.798846 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.798851 | orchestrator | 2026-01-01 00:45:34.798858 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-01 00:45:34.798865 | orchestrator | Thursday 01 January 2026 00:45:29 +0000 (0:00:00.151) 0:00:26.901 ****** 2026-01-01 00:45:34.798872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fb51feb3-1d35-54b5-8ae7-a768510de0c7'}})  2026-01-01 00:45:34.798879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'}})  2026-01-01 00:45:34.798886 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.798891 | orchestrator | 2026-01-01 
00:45:34.798897 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-01 00:45:34.798903 | orchestrator | Thursday 01 January 2026 00:45:29 +0000 (0:00:00.186) 0:00:27.088 ****** 2026-01-01 00:45:34.798909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fb51feb3-1d35-54b5-8ae7-a768510de0c7'}})  2026-01-01 00:45:34.798916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'}})  2026-01-01 00:45:34.798921 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.798927 | orchestrator | 2026-01-01 00:45:34.798933 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-01 00:45:34.798938 | orchestrator | Thursday 01 January 2026 00:45:30 +0000 (0:00:00.159) 0:00:27.247 ****** 2026-01-01 00:45:34.798944 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:45:34.798949 | orchestrator | 2026-01-01 00:45:34.798955 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-01 00:45:34.798961 | orchestrator | Thursday 01 January 2026 00:45:30 +0000 (0:00:00.167) 0:00:27.415 ****** 2026-01-01 00:45:34.798967 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:45:34.798973 | orchestrator | 2026-01-01 00:45:34.798979 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-01 00:45:34.798985 | orchestrator | Thursday 01 January 2026 00:45:30 +0000 (0:00:00.161) 0:00:27.577 ****** 2026-01-01 00:45:34.799009 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.799016 | orchestrator | 2026-01-01 00:45:34.799022 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-01 00:45:34.799029 | orchestrator | Thursday 01 January 2026 00:45:30 +0000 (0:00:00.379) 0:00:27.956 ****** 2026-01-01 
00:45:34.799035 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.799041 | orchestrator | 2026-01-01 00:45:34.799047 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-01 00:45:34.799054 | orchestrator | Thursday 01 January 2026 00:45:31 +0000 (0:00:00.170) 0:00:28.126 ****** 2026-01-01 00:45:34.799060 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.799066 | orchestrator | 2026-01-01 00:45:34.799070 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-01 00:45:34.799075 | orchestrator | Thursday 01 January 2026 00:45:31 +0000 (0:00:00.143) 0:00:28.269 ****** 2026-01-01 00:45:34.799080 | orchestrator | ok: [testbed-node-4] => { 2026-01-01 00:45:34.799084 | orchestrator |  "ceph_osd_devices": { 2026-01-01 00:45:34.799089 | orchestrator |  "sdb": { 2026-01-01 00:45:34.799094 | orchestrator |  "osd_lvm_uuid": "fb51feb3-1d35-54b5-8ae7-a768510de0c7" 2026-01-01 00:45:34.799107 | orchestrator |  }, 2026-01-01 00:45:34.799112 | orchestrator |  "sdc": { 2026-01-01 00:45:34.799117 | orchestrator |  "osd_lvm_uuid": "dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef" 2026-01-01 00:45:34.799121 | orchestrator |  } 2026-01-01 00:45:34.799126 | orchestrator |  } 2026-01-01 00:45:34.799131 | orchestrator | } 2026-01-01 00:45:34.799135 | orchestrator | 2026-01-01 00:45:34.799140 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-01 00:45:34.799144 | orchestrator | Thursday 01 January 2026 00:45:31 +0000 (0:00:00.163) 0:00:28.433 ****** 2026-01-01 00:45:34.799149 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:45:34.799153 | orchestrator | 2026-01-01 00:45:34.799158 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-01 00:45:34.799162 | orchestrator | Thursday 01 January 2026 00:45:31 +0000 (0:00:00.153) 0:00:28.587 ****** 2026-01-01 
00:45:34.799167 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:45:34.799171 | orchestrator |
2026-01-01 00:45:34.799176 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-01 00:45:34.799180 | orchestrator | Thursday 01 January 2026 00:45:31 +0000 (0:00:00.155) 0:00:28.743 ******
2026-01-01 00:45:34.799185 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:45:34.799189 | orchestrator |
2026-01-01 00:45:34.799193 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-01 00:45:34.799198 | orchestrator | Thursday 01 January 2026 00:45:31 +0000 (0:00:00.143) 0:00:28.886 ******
2026-01-01 00:45:34.799202 | orchestrator | changed: [testbed-node-4] => {
2026-01-01 00:45:34.799206 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-01 00:45:34.799211 | orchestrator |         "ceph_osd_devices": {
2026-01-01 00:45:34.799215 | orchestrator |             "sdb": {
2026-01-01 00:45:34.799220 | orchestrator |                 "osd_lvm_uuid": "fb51feb3-1d35-54b5-8ae7-a768510de0c7"
2026-01-01 00:45:34.799225 | orchestrator |             },
2026-01-01 00:45:34.799229 | orchestrator |             "sdc": {
2026-01-01 00:45:34.799233 | orchestrator |                 "osd_lvm_uuid": "dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef"
2026-01-01 00:45:34.799238 | orchestrator |             }
2026-01-01 00:45:34.799243 | orchestrator |         },
2026-01-01 00:45:34.799249 | orchestrator |         "lvm_volumes": [
2026-01-01 00:45:34.799255 | orchestrator |             {
2026-01-01 00:45:34.799261 | orchestrator |                 "data": "osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7",
2026-01-01 00:45:34.799268 | orchestrator |                 "data_vg": "ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7"
2026-01-01 00:45:34.799274 | orchestrator |             },
2026-01-01 00:45:34.799279 | orchestrator |             {
2026-01-01 00:45:34.799285 | orchestrator |                 "data": "osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef",
2026-01-01 00:45:34.799291 | orchestrator |                 "data_vg": "ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef"
2026-01-01 00:45:34.799296 | orchestrator |             }
2026-01-01 00:45:34.799302 | orchestrator |         ]
2026-01-01 00:45:34.799308 | orchestrator |     }
2026-01-01 00:45:34.799314 | orchestrator | }
2026-01-01 00:45:34.799320 | orchestrator |
2026-01-01 00:45:34.799326 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-01 00:45:34.799332 | orchestrator | Thursday 01 January 2026 00:45:32 +0000 (0:00:00.236) 0:00:29.123 ******
2026-01-01 00:45:34.799338 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-01 00:45:34.799344 | orchestrator |
2026-01-01 00:45:34.799350 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-01 00:45:34.799356 | orchestrator |
2026-01-01 00:45:34.799362 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-01 00:45:34.799368 | orchestrator | Thursday 01 January 2026 00:45:33 +0000 (0:00:01.185) 0:00:30.308 ******
2026-01-01 00:45:34.799374 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-01 00:45:34.799380 | orchestrator |
2026-01-01 00:45:34.799386 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-01 00:45:34.799407 | orchestrator | Thursday 01 January 2026 00:45:33 +0000 (0:00:00.791) 0:00:31.100 ******
2026-01-01 00:45:34.799413 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:45:34.799420 | orchestrator |
2026-01-01 00:45:34.799425 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:34.799429 | orchestrator | Thursday 01 January 2026 00:45:34 +0000 (0:00:00.362) 0:00:31.462 ******
2026-01-01 00:45:34.799432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-01 00:45:34.799436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-01 00:45:34.799440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-01 00:45:34.799444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-01 00:45:34.799447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-01 00:45:34.799457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-01 00:45:42.940576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-01 00:45:42.940676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-01 00:45:42.940683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-01 00:45:42.940688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-01 00:45:42.940700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-01 00:45:42.940705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-01 00:45:42.940709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-01 00:45:42.940713 | orchestrator |
2026-01-01 00:45:42.940718 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940769 | orchestrator | Thursday 01 January 2026 00:45:34 +0000 (0:00:00.439) 0:00:31.902 ******
2026-01-01 00:45:42.940775 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.940780 | orchestrator |
2026-01-01 00:45:42.940784 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940789 | orchestrator | Thursday 01 January 2026 00:45:35 +0000 (0:00:00.239) 0:00:32.141 ******
2026-01-01 00:45:42.940794 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.940798 | orchestrator |
2026-01-01 00:45:42.940802 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940806 | orchestrator | Thursday 01 January 2026 00:45:35 +0000 (0:00:00.231) 0:00:32.372 ******
2026-01-01 00:45:42.940810 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.940814 | orchestrator |
2026-01-01 00:45:42.940818 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940822 | orchestrator | Thursday 01 January 2026 00:45:35 +0000 (0:00:00.209) 0:00:32.582 ******
2026-01-01 00:45:42.940826 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.940829 | orchestrator |
2026-01-01 00:45:42.940833 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940837 | orchestrator | Thursday 01 January 2026 00:45:35 +0000 (0:00:00.247) 0:00:32.829 ******
2026-01-01 00:45:42.940841 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.940845 | orchestrator |
2026-01-01 00:45:42.940849 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940853 | orchestrator | Thursday 01 January 2026 00:45:35 +0000 (0:00:00.286) 0:00:33.116 ******
2026-01-01 00:45:42.940857 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.940861 | orchestrator |
2026-01-01 00:45:42.940865 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940885 | orchestrator | Thursday 01 January 2026 00:45:36 +0000 (0:00:00.192) 0:00:33.308 ******
2026-01-01 00:45:42.940889 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.940893 | orchestrator |
2026-01-01 00:45:42.940897 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940901 | orchestrator | Thursday 01 January 2026 00:45:36 +0000 (0:00:00.202) 0:00:33.510 ******
2026-01-01 00:45:42.940905 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.940909 | orchestrator |
2026-01-01 00:45:42.940913 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940917 | orchestrator | Thursday 01 January 2026 00:45:36 +0000 (0:00:00.219) 0:00:33.730 ******
2026-01-01 00:45:42.940921 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4)
2026-01-01 00:45:42.940926 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4)
2026-01-01 00:45:42.940930 | orchestrator |
2026-01-01 00:45:42.940934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940938 | orchestrator | Thursday 01 January 2026 00:45:37 +0000 (0:00:00.985) 0:00:34.715 ******
2026-01-01 00:45:42.940942 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab)
2026-01-01 00:45:42.940946 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab)
2026-01-01 00:45:42.940950 | orchestrator |
2026-01-01 00:45:42.940954 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940957 | orchestrator | Thursday 01 January 2026 00:45:38 +0000 (0:00:00.420) 0:00:35.136 ******
2026-01-01 00:45:42.940961 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c)
2026-01-01 00:45:42.940965 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c)
2026-01-01 00:45:42.940969 | orchestrator |
2026-01-01 00:45:42.940973 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940977 | orchestrator | Thursday 01 January 2026 00:45:38 +0000 (0:00:00.479) 0:00:35.615 ******
2026-01-01 00:45:42.940980 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f)
2026-01-01 00:45:42.940984 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f)
2026-01-01 00:45:42.940988 | orchestrator |
2026-01-01 00:45:42.940992 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:45:42.940996 | orchestrator | Thursday 01 January 2026 00:45:38 +0000 (0:00:00.500) 0:00:36.116 ******
2026-01-01 00:45:42.941000 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-01 00:45:42.941004 | orchestrator |
2026-01-01 00:45:42.941008 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941023 | orchestrator | Thursday 01 January 2026 00:45:39 +0000 (0:00:00.344) 0:00:36.460 ******
2026-01-01 00:45:42.941027 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-01 00:45:42.941031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-01 00:45:42.941036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-01 00:45:42.941039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-01 00:45:42.941043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-01 00:45:42.941059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-01 00:45:42.941064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-01 00:45:42.941067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-01 00:45:42.941076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-01 00:45:42.941080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-01 00:45:42.941084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-01 00:45:42.941088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-01 00:45:42.941092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-01 00:45:42.941096 | orchestrator |
2026-01-01 00:45:42.941100 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941104 | orchestrator | Thursday 01 January 2026 00:45:39 +0000 (0:00:00.527) 0:00:36.988 ******
2026-01-01 00:45:42.941107 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941111 | orchestrator |
2026-01-01 00:45:42.941115 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941119 | orchestrator | Thursday 01 January 2026 00:45:40 +0000 (0:00:00.168) 0:00:37.157 ******
2026-01-01 00:45:42.941122 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941126 | orchestrator |
2026-01-01 00:45:42.941130 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941136 | orchestrator | Thursday 01 January 2026 00:45:40 +0000 (0:00:00.170) 0:00:37.327 ******
2026-01-01 00:45:42.941141 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941145 | orchestrator |
2026-01-01 00:45:42.941150 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941155 | orchestrator | Thursday 01 January 2026 00:45:40 +0000 (0:00:00.171) 0:00:37.498 ******
2026-01-01 00:45:42.941159 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941164 | orchestrator |
2026-01-01 00:45:42.941168 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941172 | orchestrator | Thursday 01 January 2026 00:45:40 +0000 (0:00:00.170) 0:00:37.669 ******
2026-01-01 00:45:42.941177 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941181 | orchestrator |
2026-01-01 00:45:42.941186 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941191 | orchestrator | Thursday 01 January 2026 00:45:40 +0000 (0:00:00.173) 0:00:37.842 ******
2026-01-01 00:45:42.941195 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941199 | orchestrator |
2026-01-01 00:45:42.941204 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941208 | orchestrator | Thursday 01 January 2026 00:45:41 +0000 (0:00:00.503) 0:00:38.345 ******
2026-01-01 00:45:42.941213 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941217 | orchestrator |
2026-01-01 00:45:42.941221 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941226 | orchestrator | Thursday 01 January 2026 00:45:41 +0000 (0:00:00.174) 0:00:38.520 ******
2026-01-01 00:45:42.941230 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941234 | orchestrator |
2026-01-01 00:45:42.941239 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941243 | orchestrator | Thursday 01 January 2026 00:45:41 +0000 (0:00:00.172) 0:00:38.693 ******
2026-01-01 00:45:42.941248 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-01 00:45:42.941253 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-01 00:45:42.941258 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-01 00:45:42.941262 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-01 00:45:42.941266 | orchestrator |
2026-01-01 00:45:42.941271 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941275 | orchestrator | Thursday 01 January 2026 00:45:42 +0000 (0:00:00.609) 0:00:39.303 ******
2026-01-01 00:45:42.941280 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941289 | orchestrator |
2026-01-01 00:45:42.941294 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941298 | orchestrator | Thursday 01 January 2026 00:45:42 +0000 (0:00:00.192) 0:00:39.495 ******
2026-01-01 00:45:42.941303 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941307 | orchestrator |
2026-01-01 00:45:42.941312 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941316 | orchestrator | Thursday 01 January 2026 00:45:42 +0000 (0:00:00.180) 0:00:39.675 ******
2026-01-01 00:45:42.941321 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941325 | orchestrator |
2026-01-01 00:45:42.941330 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:45:42.941334 | orchestrator | Thursday 01 January 2026 00:45:42 +0000 (0:00:00.197) 0:00:39.873 ******
2026-01-01 00:45:42.941339 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:42.941343 | orchestrator |
2026-01-01 00:45:42.941350 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-01 00:45:47.305394 | orchestrator | Thursday 01 January 2026 00:45:42 +0000 (0:00:00.181) 0:00:40.055 ******
2026-01-01 00:45:47.306824 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-01 00:45:47.306861 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-01 00:45:47.306866 | orchestrator |
2026-01-01 00:45:47.306870 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-01 00:45:47.306875 | orchestrator | Thursday 01 January 2026 00:45:43 +0000 (0:00:00.206) 0:00:40.261 ******
2026-01-01 00:45:47.306879 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.306884 | orchestrator |
2026-01-01 00:45:47.306888 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-01 00:45:47.306892 | orchestrator | Thursday 01 January 2026 00:45:43 +0000 (0:00:00.225) 0:00:40.487 ******
2026-01-01 00:45:47.306895 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.306899 | orchestrator |
2026-01-01 00:45:47.306903 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-01 00:45:47.306907 | orchestrator | Thursday 01 January 2026 00:45:43 +0000 (0:00:00.264) 0:00:40.752 ******
2026-01-01 00:45:47.306911 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.306914 | orchestrator |
2026-01-01 00:45:47.306918 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-01 00:45:47.306922 | orchestrator | Thursday 01 January 2026 00:45:43 +0000 (0:00:00.264) 0:00:41.016 ******
2026-01-01 00:45:47.306926 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:45:47.306931 | orchestrator |
2026-01-01 00:45:47.306935 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-01 00:45:47.306939 | orchestrator | Thursday 01 January 2026 00:45:44 +0000 (0:00:00.112) 0:00:41.129 ******
2026-01-01 00:45:47.308104 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a4d57ac9-6a13-5c0f-8510-006bddd85eb9'}})
2026-01-01 00:45:47.308130 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3a96b51d-dfb4-5025-9237-a7295cf21147'}})
2026-01-01 00:45:47.308134 | orchestrator |
2026-01-01 00:45:47.308139 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-01 00:45:47.308144 | orchestrator | Thursday 01 January 2026 00:45:44 +0000 (0:00:00.169) 0:00:41.298 ******
2026-01-01 00:45:47.309266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a4d57ac9-6a13-5c0f-8510-006bddd85eb9'}})
2026-01-01 00:45:47.310442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3a96b51d-dfb4-5025-9237-a7295cf21147'}})
2026-01-01 00:45:47.310455 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.310460 | orchestrator |
2026-01-01 00:45:47.310465 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-01 00:45:47.310471 | orchestrator | Thursday 01 January 2026 00:45:44 +0000 (0:00:00.177) 0:00:41.476 ******
2026-01-01 00:45:47.310503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a4d57ac9-6a13-5c0f-8510-006bddd85eb9'}})
2026-01-01 00:45:47.310508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3a96b51d-dfb4-5025-9237-a7295cf21147'}})
2026-01-01 00:45:47.310512 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.310516 | orchestrator |
2026-01-01 00:45:47.310520 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-01 00:45:47.310524 | orchestrator | Thursday 01 January 2026 00:45:44 +0000 (0:00:00.147) 0:00:41.623 ******
2026-01-01 00:45:47.310541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a4d57ac9-6a13-5c0f-8510-006bddd85eb9'}})
2026-01-01 00:45:47.310545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3a96b51d-dfb4-5025-9237-a7295cf21147'}})
2026-01-01 00:45:47.310549 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.310553 | orchestrator |
2026-01-01 00:45:47.310557 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-01 00:45:47.310561 | orchestrator | Thursday 01 January 2026 00:45:44 +0000 (0:00:00.138) 0:00:41.762 ******
2026-01-01 00:45:47.310565 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:45:47.310570 | orchestrator |
2026-01-01 00:45:47.310574 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-01 00:45:47.310578 | orchestrator | Thursday 01 January 2026 00:45:44 +0000 (0:00:00.111) 0:00:41.873 ******
2026-01-01 00:45:47.310582 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:45:47.310586 | orchestrator |
2026-01-01 00:45:47.310590 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-01 00:45:47.310594 | orchestrator | Thursday 01 January 2026 00:45:44 +0000 (0:00:00.130) 0:00:42.003 ******
2026-01-01 00:45:47.310597 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.310602 | orchestrator |
2026-01-01 00:45:47.310606 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-01 00:45:47.310610 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.126) 0:00:42.130 ******
2026-01-01 00:45:47.310613 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.310617 | orchestrator |
2026-01-01 00:45:47.310621 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-01 00:45:47.310625 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.123) 0:00:42.254 ******
2026-01-01 00:45:47.310629 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.310633 | orchestrator |
2026-01-01 00:45:47.310637 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-01 00:45:47.310641 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.161) 0:00:42.415 ******
2026-01-01 00:45:47.310645 | orchestrator | ok: [testbed-node-5] => {
2026-01-01 00:45:47.310649 | orchestrator |     "ceph_osd_devices": {
2026-01-01 00:45:47.310654 | orchestrator |         "sdb": {
2026-01-01 00:45:47.310682 | orchestrator |             "osd_lvm_uuid": "a4d57ac9-6a13-5c0f-8510-006bddd85eb9"
2026-01-01 00:45:47.310687 | orchestrator |         },
2026-01-01 00:45:47.310691 | orchestrator |         "sdc": {
2026-01-01 00:45:47.310695 | orchestrator |             "osd_lvm_uuid": "3a96b51d-dfb4-5025-9237-a7295cf21147"
2026-01-01 00:45:47.310699 | orchestrator |         }
2026-01-01 00:45:47.310704 | orchestrator |     }
2026-01-01 00:45:47.310708 | orchestrator | }
2026-01-01 00:45:47.310712 | orchestrator |
2026-01-01 00:45:47.310716 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-01 00:45:47.310720 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.155) 0:00:42.571 ******
2026-01-01 00:45:47.310724 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.310770 | orchestrator |
2026-01-01 00:45:47.310776 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-01 00:45:47.310781 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.196) 0:00:42.768 ******
2026-01-01 00:45:47.310792 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.310797 | orchestrator |
2026-01-01 00:45:47.310801 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-01 00:45:47.310805 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.302) 0:00:43.070 ******
2026-01-01 00:45:47.310809 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:45:47.310813 | orchestrator |
2026-01-01 00:45:47.310816 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-01 00:45:47.310820 | orchestrator | Thursday 01 January 2026 00:45:46 +0000 (0:00:00.159) 0:00:43.230 ******
2026-01-01 00:45:47.310824 | orchestrator | changed: [testbed-node-5] => {
2026-01-01 00:45:47.310828 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-01 00:45:47.310831 | orchestrator |         "ceph_osd_devices": {
2026-01-01 00:45:47.310835 | orchestrator |             "sdb": {
2026-01-01 00:45:47.310839 | orchestrator |                 "osd_lvm_uuid": "a4d57ac9-6a13-5c0f-8510-006bddd85eb9"
2026-01-01 00:45:47.310843 | orchestrator |             },
2026-01-01 00:45:47.310847 | orchestrator |             "sdc": {
2026-01-01 00:45:47.310851 | orchestrator |                 "osd_lvm_uuid": "3a96b51d-dfb4-5025-9237-a7295cf21147"
2026-01-01 00:45:47.310855 | orchestrator |             }
2026-01-01 00:45:47.310858 | orchestrator |         },
2026-01-01 00:45:47.310862 | orchestrator |         "lvm_volumes": [
2026-01-01 00:45:47.310866 | orchestrator |             {
2026-01-01 00:45:47.310870 | orchestrator |                 "data": "osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9",
2026-01-01 00:45:47.310874 | orchestrator |                 "data_vg": "ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9"
2026-01-01 00:45:47.310878 | orchestrator |             },
2026-01-01 00:45:47.310882 | orchestrator |             {
2026-01-01 00:45:47.310886 | orchestrator |                 "data": "osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147",
2026-01-01 00:45:47.310890 | orchestrator |                 "data_vg": "ceph-3a96b51d-dfb4-5025-9237-a7295cf21147"
2026-01-01 00:45:47.310894 | orchestrator |             }
2026-01-01 00:45:47.310900 | orchestrator |         ]
2026-01-01 00:45:47.310904 | orchestrator |     }
2026-01-01 00:45:47.310908 | orchestrator | }
2026-01-01 00:45:47.310912 | orchestrator |
2026-01-01 00:45:47.310916 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-01 00:45:47.310919 | orchestrator | Thursday 01 January 2026 00:45:46 +0000 (0:00:00.201) 0:00:43.431 ******
2026-01-01 00:45:47.310923 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-01 00:45:47.310927 | orchestrator |
2026-01-01 00:45:47.310931 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:45:47.310935 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-01 00:45:47.310940 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-01 00:45:47.310944 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-01 00:45:47.310947 | orchestrator |
2026-01-01 00:45:47.310951 | orchestrator |
2026-01-01 00:45:47.310955 | orchestrator |
2026-01-01 00:45:47.310959 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:45:47.310962 | orchestrator | Thursday 01 January 2026 00:45:47 +0000 (0:00:00.958) 0:00:44.390 ******
2026-01-01 00:45:47.310966 | orchestrator | ===============================================================================
2026-01-01 00:45:47.310970 | orchestrator | Write configuration file ------------------------------------------------ 3.99s
2026-01-01 00:45:47.310974 | orchestrator | Add known partitions to the list of available block devices ------------- 1.40s
2026-01-01 00:45:47.310977 | orchestrator | Add known links to the list of available block devices ------------------ 1.36s
2026-01-01 00:45:47.310981 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.33s
2026-01-01 00:45:47.310988 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s
2026-01-01 00:45:47.310992 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s
2026-01-01 00:45:47.310996 | orchestrator | Get initial list of available block devices ----------------------------- 0.94s
2026-01-01 00:45:47.310999 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s
2026-01-01 00:45:47.311003 | orchestrator | Print configuration data ------------------------------------------------ 0.89s
2026-01-01 00:45:47.311007 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s
2026-01-01 00:45:47.311011 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2026-01-01 00:45:47.311014 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-01-01 00:45:47.311018 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.71s
2026-01-01 00:45:47.311026 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-01-01 00:45:47.562284 | orchestrator | Set DB devices config data ---------------------------------------------- 0.65s
2026-01-01 00:45:47.562370 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-01-01 00:45:47.562379 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-01-01 00:45:47.562387 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.64s
2026-01-01 00:45:47.562395 | orchestrator | Print DB devices -------------------------------------------------------- 0.62s
2026-01-01 00:45:47.562402 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2026-01-01 00:46:10.156987 | orchestrator | 2026-01-01 00:46:10 | INFO  | Task bcad7059-aa2e-4b95-ac29-489cb3125781 (sync inventory) is running in background. Output coming soon.
2026-01-01 00:46:38.372910 | orchestrator | 2026-01-01 00:46:11 | INFO  | Starting group_vars file reorganization
2026-01-01 00:46:38.374567 | orchestrator | 2026-01-01 00:46:11 | INFO  | Moved 0 file(s) to their respective directories
2026-01-01 00:46:38.374602 | orchestrator | 2026-01-01 00:46:11 | INFO  | Group_vars file reorganization completed
2026-01-01 00:46:38.374615 | orchestrator | 2026-01-01 00:46:14 | INFO  | Starting variable preparation from inventory
2026-01-01 00:46:38.374627 | orchestrator | 2026-01-01 00:46:16 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-01 00:46:38.374639 | orchestrator | 2026-01-01 00:46:16 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-01 00:46:38.374673 | orchestrator | 2026-01-01 00:46:16 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-01 00:46:38.374685 | orchestrator | 2026-01-01 00:46:16 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-01 00:46:38.374698 | orchestrator | 2026-01-01 00:46:16 | INFO  | Variable preparation completed
2026-01-01 00:46:38.374710 | orchestrator | 2026-01-01 00:46:18 | INFO  | Starting inventory overwrite handling
2026-01-01 00:46:38.374726 | orchestrator | 2026-01-01 00:46:18 | INFO  | Handling group overwrites in 99-overwrite
2026-01-01 00:46:38.374766 | orchestrator | 2026-01-01 00:46:18 | INFO  | Removing group frr:children from 60-generic
2026-01-01 00:46:38.374778 | orchestrator | 2026-01-01 00:46:18 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-01 00:46:38.374790 | orchestrator | 2026-01-01 00:46:18 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-01 00:46:38.374802 | orchestrator | 2026-01-01 00:46:18 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-01 00:46:38.374813 | orchestrator | 2026-01-01 00:46:18 | INFO  | Handling group overwrites in 20-roles
2026-01-01 00:46:38.374852 | orchestrator | 2026-01-01 00:46:18 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-01 00:46:38.374864 | orchestrator | 2026-01-01 00:46:18 | INFO  | Removed 5 group(s) in total
2026-01-01 00:46:38.374875 | orchestrator | 2026-01-01 00:46:18 | INFO  | Inventory overwrite handling completed
2026-01-01 00:46:38.374887 | orchestrator | 2026-01-01 00:46:19 | INFO  | Starting merge of inventory files
2026-01-01 00:46:38.374899 | orchestrator | 2026-01-01 00:46:19 | INFO  | Inventory files merged successfully
2026-01-01 00:46:38.374910 | orchestrator | 2026-01-01 00:46:25 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-01 00:46:38.374921 | orchestrator | 2026-01-01 00:46:36 | INFO  | Successfully wrote ClusterShell configuration
2026-01-01 00:46:38.374933 | orchestrator | [master 8f436ee] 2026-01-01-00-46
2026-01-01 00:46:38.374945 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-01 00:46:41.027895 | orchestrator | 2026-01-01 00:46:41 | INFO  | Task 914f06ad-9eca-470d-a2ab-22de7af37d5b (ceph-create-lvm-devices) was prepared for execution.
2026-01-01 00:46:41.028667 | orchestrator | 2026-01-01 00:46:41 | INFO  | It takes a moment until task 914f06ad-9eca-470d-a2ab-22de7af37d5b (ceph-create-lvm-devices) has been started and output is visible here.
2026-01-01 00:46:53.435343 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-01 00:46:53.435472 | orchestrator | 2.16.14
2026-01-01 00:46:53.435491 | orchestrator |
2026-01-01 00:46:53.435504 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-01 00:46:53.435516 | orchestrator |
2026-01-01 00:46:53.435528 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-01 00:46:53.435551 | orchestrator | Thursday 01 January 2026 00:46:46 +0000 (0:00:00.322) 0:00:00.322 ******
2026-01-01 00:46:53.435563 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-01 00:46:53.435575 | orchestrator |
2026-01-01 00:46:53.435586 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-01 00:46:53.435597 | orchestrator | Thursday 01 January 2026 00:46:46 +0000 (0:00:00.313) 0:00:00.636 ******
2026-01-01 00:46:53.435608 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:46:53.435619 | orchestrator |
2026-01-01 00:46:53.435632 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.435643 | orchestrator | Thursday 01 January 2026 00:46:46 +0000 (0:00:00.254) 0:00:00.890 ******
2026-01-01 00:46:53.435655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-01 00:46:53.435666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-01 00:46:53.435677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-01 00:46:53.435688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-01 00:46:53.435699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-01 00:46:53.435710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-01 00:46:53.435721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-01 00:46:53.435732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-01 00:46:53.435771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-01 00:46:53.435786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-01 00:46:53.435797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-01 00:46:53.435811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-01 00:46:53.435851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-01 00:46:53.435864 | orchestrator |
2026-01-01 00:46:53.435876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.435889 | orchestrator | Thursday 01 January 2026 00:46:47 +0000 (0:00:00.467) 0:00:01.358 ******
2026-01-01 00:46:53.435903 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.435915 | orchestrator |
2026-01-01 00:46:53.435928 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.435941 | orchestrator | Thursday 01 January 2026 00:46:47 +0000 (0:00:00.272) 0:00:01.630 ******
2026-01-01 00:46:53.435954 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.435966 | orchestrator |
2026-01-01 00:46:53.435979 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.435992 | orchestrator | Thursday 01 January 2026 00:46:47 +0000 (0:00:00.170) 0:00:01.801 ******
2026-01-01 00:46:53.436005 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.436017 | orchestrator |
2026-01-01 00:46:53.436030 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.436043 | orchestrator | Thursday 01 January 2026 00:46:47 +0000 (0:00:00.242) 0:00:02.043 ******
2026-01-01 00:46:53.436056 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.436068 | orchestrator |
2026-01-01 00:46:53.436080 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.436093 | orchestrator | Thursday 01 January 2026 00:46:48 +0000 (0:00:00.190) 0:00:02.234 ******
2026-01-01 00:46:53.436106 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.436118 | orchestrator |
2026-01-01 00:46:53.436131 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.436144 | orchestrator | Thursday 01 January 2026 00:46:48 +0000 (0:00:00.208) 0:00:02.442 ******
2026-01-01 00:46:53.436157 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.436168 | orchestrator |
2026-01-01 00:46:53.436179 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.436190 | orchestrator | Thursday 01 January 2026 00:46:48 +0000 (0:00:00.258) 0:00:02.701 ******
2026-01-01 00:46:53.436201 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.436212 | orchestrator |
2026-01-01 00:46:53.436223 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.436234 | orchestrator | Thursday 01 January 2026 00:46:48 +0000 (0:00:00.217) 0:00:02.919 ******
2026-01-01 00:46:53.436245 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.436256 | orchestrator |
2026-01-01 00:46:53.436267 | orchestrator | TASK [Add known links to the list of available block devices]
******************
2026-01-01 00:46:53.436278 | orchestrator | Thursday 01 January 2026 00:46:48 +0000 (0:00:00.198) 0:00:03.118 ******
2026-01-01 00:46:53.436289 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8)
2026-01-01 00:46:53.436302 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8)
2026-01-01 00:46:53.436313 | orchestrator |
2026-01-01 00:46:53.436324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.436355 | orchestrator | Thursday 01 January 2026 00:46:49 +0000 (0:00:00.405) 0:00:03.523 ******
2026-01-01 00:46:53.436367 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_39cbb32c-70e6-49f5-af10-21eee2e163cb)
2026-01-01 00:46:53.436378 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_39cbb32c-70e6-49f5-af10-21eee2e163cb)
2026-01-01 00:46:53.436389 | orchestrator |
2026-01-01 00:46:53.436401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.436412 | orchestrator | Thursday 01 January 2026 00:46:49 +0000 (0:00:00.604) 0:00:04.128 ******
2026-01-01 00:46:53.436423 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4d1e5713-0224-44c3-b7e5-3534e8d42515)
2026-01-01 00:46:53.436442 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4d1e5713-0224-44c3-b7e5-3534e8d42515)
2026-01-01 00:46:53.436454 | orchestrator |
2026-01-01 00:46:53.436465 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.436476 | orchestrator | Thursday 01 January 2026 00:46:50 +0000 (0:00:00.590) 0:00:04.718 ******
2026-01-01 00:46:53.436487 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_20b388ad-01dd-4aa6-8ed7-7c845856acdf)
2026-01-01 00:46:53.436498 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_20b388ad-01dd-4aa6-8ed7-7c845856acdf)
2026-01-01 00:46:53.436509 | orchestrator |
2026-01-01 00:46:53.436520 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:46:53.436531 | orchestrator | Thursday 01 January 2026 00:46:51 +0000 (0:00:00.737) 0:00:05.456 ******
2026-01-01 00:46:53.436542 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-01 00:46:53.436553 | orchestrator |
2026-01-01 00:46:53.436564 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:46:53.436575 | orchestrator | Thursday 01 January 2026 00:46:51 +0000 (0:00:00.430) 0:00:05.886 ******
2026-01-01 00:46:53.436586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-01 00:46:53.436597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-01 00:46:53.436608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-01 00:46:53.436639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-01 00:46:53.436651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-01 00:46:53.436662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-01 00:46:53.436673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-01 00:46:53.436684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-01 00:46:53.436695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-01 00:46:53.436706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-01 00:46:53.436717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-01 00:46:53.436733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-01 00:46:53.436771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-01 00:46:53.436790 | orchestrator |
2026-01-01 00:46:53.436806 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:46:53.436826 | orchestrator | Thursday 01 January 2026 00:46:52 +0000 (0:00:00.418) 0:00:06.305 ******
2026-01-01 00:46:53.436845 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.436863 | orchestrator |
2026-01-01 00:46:53.436878 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:46:53.436889 | orchestrator | Thursday 01 January 2026 00:46:52 +0000 (0:00:00.193) 0:00:06.498 ******
2026-01-01 00:46:53.436901 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.436912 | orchestrator |
2026-01-01 00:46:53.436923 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:46:53.436933 | orchestrator | Thursday 01 January 2026 00:46:52 +0000 (0:00:00.213) 0:00:06.712 ******
2026-01-01 00:46:53.436944 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.436955 | orchestrator |
2026-01-01 00:46:53.436966 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:46:53.436977 | orchestrator | Thursday 01 January 2026 00:46:52 +0000 (0:00:00.187) 0:00:06.900 ******
2026-01-01 00:46:53.436988 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.437007 | orchestrator |
2026-01-01 00:46:53.437019 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:46:53.437030 | orchestrator | Thursday 01 January 2026 00:46:52 +0000 (0:00:00.209) 0:00:07.109 ******
2026-01-01 00:46:53.437040 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.437051 | orchestrator |
2026-01-01 00:46:53.437062 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:46:53.437073 | orchestrator | Thursday 01 January 2026 00:46:53 +0000 (0:00:00.180) 0:00:07.290 ******
2026-01-01 00:46:53.437084 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.437095 | orchestrator |
2026-01-01 00:46:53.437106 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:46:53.437117 | orchestrator | Thursday 01 January 2026 00:46:53 +0000 (0:00:00.190) 0:00:07.480 ******
2026-01-01 00:46:53.437128 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:46:53.437139 | orchestrator |
2026-01-01 00:46:53.437157 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:01.312906 | orchestrator | Thursday 01 January 2026 00:46:53 +0000 (0:00:00.173) 0:00:07.654 ******
2026-01-01 00:47:01.312990 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.312999 | orchestrator |
2026-01-01 00:47:01.313004 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:01.313008 | orchestrator | Thursday 01 January 2026 00:46:53 +0000 (0:00:00.190) 0:00:07.845 ******
2026-01-01 00:47:01.313012 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-01 00:47:01.313017 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-01 00:47:01.313022 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-01 00:47:01.313026 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-01 00:47:01.313030 | orchestrator | 2026-01-01
00:47:01.313034 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:01.313038 | orchestrator | Thursday 01 January 2026 00:46:54 +0000 (0:00:00.916) 0:00:08.762 ******
2026-01-01 00:47:01.313042 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313046 | orchestrator |
2026-01-01 00:47:01.313050 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:01.313054 | orchestrator | Thursday 01 January 2026 00:46:54 +0000 (0:00:00.234) 0:00:08.996 ******
2026-01-01 00:47:01.313058 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313062 | orchestrator |
2026-01-01 00:47:01.313066 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:01.313070 | orchestrator | Thursday 01 January 2026 00:46:54 +0000 (0:00:00.203) 0:00:09.200 ******
2026-01-01 00:47:01.313074 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313078 | orchestrator |
2026-01-01 00:47:01.313082 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:01.313086 | orchestrator | Thursday 01 January 2026 00:46:55 +0000 (0:00:00.215) 0:00:09.415 ******
2026-01-01 00:47:01.313089 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313093 | orchestrator |
2026-01-01 00:47:01.313097 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-01 00:47:01.313101 | orchestrator | Thursday 01 January 2026 00:46:55 +0000 (0:00:00.221) 0:00:09.637 ******
2026-01-01 00:47:01.313105 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313109 | orchestrator |
2026-01-01 00:47:01.313113 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-01 00:47:01.313117 | orchestrator | Thursday 01 January 2026 00:46:55 +0000 (0:00:00.170) 0:00:09.808 ******
2026-01-01 00:47:01.313121 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd1d2d931-4931-5db0-ab2b-78163b321b95'}})
2026-01-01 00:47:01.313126 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'}})
2026-01-01 00:47:01.313130 | orchestrator |
2026-01-01 00:47:01.313134 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-01 00:47:01.313153 | orchestrator | Thursday 01 January 2026 00:46:55 +0000 (0:00:00.219) 0:00:10.028 ******
2026-01-01 00:47:01.313158 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:01.313162 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:01.313166 | orchestrator |
2026-01-01 00:47:01.313170 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-01 00:47:01.313174 | orchestrator | Thursday 01 January 2026 00:46:57 +0000 (0:00:02.012) 0:00:12.040 ******
2026-01-01 00:47:01.313178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:01.313183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:01.313187 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313191 | orchestrator |
2026-01-01 00:47:01.313195 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-01 00:47:01.313198 | orchestrator | Thursday 01 January 2026 00:46:57 +0000 (0:00:00.157) 0:00:12.198 ******
2026-01-01 00:47:01.313202 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:01.313206 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:01.313210 | orchestrator |
2026-01-01 00:47:01.313214 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-01 00:47:01.313218 | orchestrator | Thursday 01 January 2026 00:46:59 +0000 (0:00:01.414) 0:00:13.613 ******
2026-01-01 00:47:01.313222 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:01.313226 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:01.313230 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313234 | orchestrator |
2026-01-01 00:47:01.313238 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-01 00:47:01.313244 | orchestrator | Thursday 01 January 2026 00:46:59 +0000 (0:00:00.152) 0:00:13.765 ******
2026-01-01 00:47:01.313262 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313266 | orchestrator |
2026-01-01 00:47:01.313270 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-01 00:47:01.313274 | orchestrator | Thursday 01 January 2026 00:46:59 +0000 (0:00:00.123) 0:00:13.889 ******
2026-01-01 00:47:01.313278 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg':
'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:01.313282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:01.313285 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313289 | orchestrator |
2026-01-01 00:47:01.313293 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-01 00:47:01.313297 | orchestrator | Thursday 01 January 2026 00:47:00 +0000 (0:00:00.353) 0:00:14.242 ******
2026-01-01 00:47:01.313301 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313305 | orchestrator |
2026-01-01 00:47:01.313308 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-01 00:47:01.313312 | orchestrator | Thursday 01 January 2026 00:47:00 +0000 (0:00:00.144) 0:00:14.387 ******
2026-01-01 00:47:01.313320 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:01.313324 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:01.313328 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313332 | orchestrator |
2026-01-01 00:47:01.313336 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-01 00:47:01.313340 | orchestrator | Thursday 01 January 2026 00:47:00 +0000 (0:00:00.185) 0:00:14.573 ******
2026-01-01 00:47:01.313344 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313347 | orchestrator |
2026-01-01 00:47:01.313351 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-01 00:47:01.313355 | orchestrator | Thursday 01 January 2026 00:47:00 +0000 (0:00:00.116) 0:00:14.690 ******
2026-01-01 00:47:01.313359 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:01.313363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:01.313367 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313370 | orchestrator |
2026-01-01 00:47:01.313374 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-01 00:47:01.313378 | orchestrator | Thursday 01 January 2026 00:47:00 +0000 (0:00:00.147) 0:00:14.837 ******
2026-01-01 00:47:01.313382 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:47:01.313386 | orchestrator |
2026-01-01 00:47:01.313390 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-01 00:47:01.313406 | orchestrator | Thursday 01 January 2026 00:47:00 +0000 (0:00:00.116) 0:00:14.954 ******
2026-01-01 00:47:01.313413 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:01.313417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:01.313421 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313425 | orchestrator |
2026-01-01 00:47:01.313428 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-01 00:47:01.313432 | orchestrator | Thursday 01 January 2026 00:47:00 +0000 (0:00:00.146) 0:00:15.101 ******
2026-01-01 00:47:01.313436 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:01.313440 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:01.313444 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313448 | orchestrator |
2026-01-01 00:47:01.313451 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-01 00:47:01.313455 | orchestrator | Thursday 01 January 2026 00:47:01 +0000 (0:00:00.140) 0:00:15.242 ******
2026-01-01 00:47:01.313459 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:01.313464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:01.313469 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313473 | orchestrator |
2026-01-01 00:47:01.313478 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-01 00:47:01.313487 | orchestrator | Thursday 01 January 2026 00:47:01 +0000 (0:00:00.156) 0:00:15.398 ******
2026-01-01 00:47:01.313492 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:01.313496 | orchestrator |
2026-01-01 00:47:01.313501 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-01 00:47:01.313508 | orchestrator | Thursday 01 January 2026 00:47:01 +0000 (0:00:00.136) 0:00:15.534 ******
2026-01-01 00:47:07.774377 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.774530 | orchestrator |
2026-01-01 00:47:07.774549 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a
DB+WAL VG] *****************
2026-01-01 00:47:07.774563 | orchestrator | Thursday 01 January 2026 00:47:01 +0000 (0:00:00.105) 0:00:15.640 ******
2026-01-01 00:47:07.774575 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.774586 | orchestrator |
2026-01-01 00:47:07.774598 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-01 00:47:07.774609 | orchestrator | Thursday 01 January 2026 00:47:01 +0000 (0:00:00.128) 0:00:15.768 ******
2026-01-01 00:47:07.774620 | orchestrator | ok: [testbed-node-3] => {
2026-01-01 00:47:07.774632 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-01-01 00:47:07.774644 | orchestrator | }
2026-01-01 00:47:07.774655 | orchestrator |
2026-01-01 00:47:07.774666 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-01 00:47:07.774677 | orchestrator | Thursday 01 January 2026 00:47:01 +0000 (0:00:00.286) 0:00:16.054 ******
2026-01-01 00:47:07.774689 | orchestrator | ok: [testbed-node-3] => {
2026-01-01 00:47:07.774700 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-01-01 00:47:07.774711 | orchestrator | }
2026-01-01 00:47:07.774722 | orchestrator |
2026-01-01 00:47:07.774733 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-01 00:47:07.774905 | orchestrator | Thursday 01 January 2026 00:47:01 +0000 (0:00:00.150) 0:00:16.205 ******
2026-01-01 00:47:07.774934 | orchestrator | ok: [testbed-node-3] => {
2026-01-01 00:47:07.774946 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-01-01 00:47:07.774958 | orchestrator | }
2026-01-01 00:47:07.774969 | orchestrator |
2026-01-01 00:47:07.774980 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-01 00:47:07.774992 | orchestrator | Thursday 01 January 2026 00:47:02 +0000 (0:00:00.145) 0:00:16.350 ******
2026-01-01 00:47:07.775003 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:47:07.775014 | orchestrator |
2026-01-01 00:47:07.775025 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-01 00:47:07.775036 | orchestrator | Thursday 01 January 2026 00:47:02 +0000 (0:00:00.669) 0:00:17.020 ******
2026-01-01 00:47:07.775047 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:47:07.775058 | orchestrator |
2026-01-01 00:47:07.775069 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-01 00:47:07.775080 | orchestrator | Thursday 01 January 2026 00:47:03 +0000 (0:00:00.545) 0:00:17.565 ******
2026-01-01 00:47:07.775091 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:47:07.775102 | orchestrator |
2026-01-01 00:47:07.775113 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-01 00:47:07.775124 | orchestrator | Thursday 01 January 2026 00:47:03 +0000 (0:00:00.570) 0:00:18.136 ******
2026-01-01 00:47:07.775135 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:47:07.775147 | orchestrator |
2026-01-01 00:47:07.775157 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-01 00:47:07.775169 | orchestrator | Thursday 01 January 2026 00:47:04 +0000 (0:00:00.167) 0:00:18.303 ******
2026-01-01 00:47:07.775180 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775191 | orchestrator |
2026-01-01 00:47:07.775202 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-01 00:47:07.775213 | orchestrator | Thursday 01 January 2026 00:47:04 +0000 (0:00:00.124) 0:00:18.428 ******
2026-01-01 00:47:07.775224 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775235 | orchestrator |
2026-01-01 00:47:07.775246 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-01 00:47:07.775299 | orchestrator | Thursday 01 January 2026 00:47:04 +0000 (0:00:00.140) 0:00:18.569 ******
2026-01-01 00:47:07.775311 | orchestrator | ok: [testbed-node-3] => {
2026-01-01 00:47:07.775323 | orchestrator |  "vgs_report": {
2026-01-01 00:47:07.775334 | orchestrator |  "vg": []
2026-01-01 00:47:07.775345 | orchestrator |  }
2026-01-01 00:47:07.775356 | orchestrator | }
2026-01-01 00:47:07.775367 | orchestrator |
2026-01-01 00:47:07.775378 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-01 00:47:07.775389 | orchestrator | Thursday 01 January 2026 00:47:04 +0000 (0:00:00.148) 0:00:18.717 ******
2026-01-01 00:47:07.775400 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775410 | orchestrator |
2026-01-01 00:47:07.775421 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-01 00:47:07.775433 | orchestrator | Thursday 01 January 2026 00:47:04 +0000 (0:00:00.133) 0:00:18.850 ******
2026-01-01 00:47:07.775443 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775454 | orchestrator |
2026-01-01 00:47:07.775465 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-01 00:47:07.775476 | orchestrator | Thursday 01 January 2026 00:47:04 +0000 (0:00:00.123) 0:00:18.974 ******
2026-01-01 00:47:07.775487 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775498 | orchestrator |
2026-01-01 00:47:07.775508 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-01 00:47:07.775519 | orchestrator | Thursday 01 January 2026 00:47:05 +0000 (0:00:00.367) 0:00:19.342 ******
2026-01-01 00:47:07.775530 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775541 | orchestrator |
2026-01-01 00:47:07.775553 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-01 00:47:07.775563 | orchestrator |
Thursday 01 January 2026 00:47:05 +0000 (0:00:00.161) 0:00:19.503 ******
2026-01-01 00:47:07.775574 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775586 | orchestrator |
2026-01-01 00:47:07.775596 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-01 00:47:07.775607 | orchestrator | Thursday 01 January 2026 00:47:05 +0000 (0:00:00.156) 0:00:19.659 ******
2026-01-01 00:47:07.775618 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775629 | orchestrator |
2026-01-01 00:47:07.775640 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-01 00:47:07.775651 | orchestrator | Thursday 01 January 2026 00:47:05 +0000 (0:00:00.133) 0:00:19.793 ******
2026-01-01 00:47:07.775662 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775673 | orchestrator |
2026-01-01 00:47:07.775684 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-01 00:47:07.775695 | orchestrator | Thursday 01 January 2026 00:47:05 +0000 (0:00:00.146) 0:00:19.939 ******
2026-01-01 00:47:07.775727 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775739 | orchestrator |
2026-01-01 00:47:07.775790 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-01 00:47:07.775802 | orchestrator | Thursday 01 January 2026 00:47:05 +0000 (0:00:00.180) 0:00:20.120 ******
2026-01-01 00:47:07.775813 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775824 | orchestrator |
2026-01-01 00:47:07.775835 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-01 00:47:07.775846 | orchestrator | Thursday 01 January 2026 00:47:06 +0000 (0:00:00.133) 0:00:20.253 ******
2026-01-01 00:47:07.775857 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775868 | orchestrator |
2026-01-01 00:47:07.775879 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-01 00:47:07.775889 | orchestrator | Thursday 01 January 2026 00:47:06 +0000 (0:00:00.152) 0:00:20.406 ******
2026-01-01 00:47:07.775900 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775911 | orchestrator |
2026-01-01 00:47:07.775922 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-01 00:47:07.775933 | orchestrator | Thursday 01 January 2026 00:47:06 +0000 (0:00:00.111) 0:00:20.517 ******
2026-01-01 00:47:07.775959 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.775970 | orchestrator |
2026-01-01 00:47:07.775981 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-01 00:47:07.775992 | orchestrator | Thursday 01 January 2026 00:47:06 +0000 (0:00:00.102) 0:00:20.620 ******
2026-01-01 00:47:07.776003 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.776014 | orchestrator |
2026-01-01 00:47:07.776025 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-01 00:47:07.776036 | orchestrator | Thursday 01 January 2026 00:47:06 +0000 (0:00:00.110) 0:00:20.731 ******
2026-01-01 00:47:07.776047 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.776058 | orchestrator |
2026-01-01 00:47:07.776069 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-01 00:47:07.776080 | orchestrator | Thursday 01 January 2026 00:47:06 +0000 (0:00:00.137) 0:00:20.869 ******
2026-01-01 00:47:07.776092 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:07.776105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:07.776116 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.776127 | orchestrator |
2026-01-01 00:47:07.776138 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-01 00:47:07.776149 | orchestrator | Thursday 01 January 2026 00:47:06 +0000 (0:00:00.273) 0:00:21.142 ******
2026-01-01 00:47:07.776160 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:07.776171 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:07.776182 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.776193 | orchestrator |
2026-01-01 00:47:07.776204 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-01 00:47:07.776215 | orchestrator | Thursday 01 January 2026 00:47:07 +0000 (0:00:00.139) 0:00:21.281 ******
2026-01-01 00:47:07.776226 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})
2026-01-01 00:47:07.776238 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})
2026-01-01 00:47:07.776249 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:47:07.776260 | orchestrator |
2026-01-01 00:47:07.776271 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-01 00:47:07.776281 | orchestrator | Thursday 01 January 2026 00:47:07 +0000 (0:00:00.156) 0:00:21.438 ******
2026-01-01 00:47:07.776292 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})  2026-01-01 00:47:07.776304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})  2026-01-01 00:47:07.776315 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:47:07.776325 | orchestrator | 2026-01-01 00:47:07.776336 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-01 00:47:07.776347 | orchestrator | Thursday 01 January 2026 00:47:07 +0000 (0:00:00.181) 0:00:21.619 ****** 2026-01-01 00:47:07.776358 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})  2026-01-01 00:47:07.776369 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})  2026-01-01 00:47:07.776387 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:47:07.776398 | orchestrator | 2026-01-01 00:47:07.776409 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-01 00:47:07.776428 | orchestrator | Thursday 01 January 2026 00:47:07 +0000 (0:00:00.196) 0:00:21.815 ****** 2026-01-01 00:47:07.776447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})  2026-01-01 00:47:12.995814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})  2026-01-01 00:47:12.995934 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:47:12.995951 | orchestrator | 2026-01-01 00:47:12.995964 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-01 00:47:12.995977 | orchestrator | Thursday 01 January 2026 00:47:07 +0000 (0:00:00.181) 0:00:21.996 ****** 2026-01-01 00:47:12.995989 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})  2026-01-01 00:47:12.996000 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})  2026-01-01 00:47:12.996011 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:47:12.996022 | orchestrator | 2026-01-01 00:47:12.996033 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-01 00:47:12.996045 | orchestrator | Thursday 01 January 2026 00:47:07 +0000 (0:00:00.161) 0:00:22.158 ****** 2026-01-01 00:47:12.996056 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})  2026-01-01 00:47:12.996067 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})  2026-01-01 00:47:12.996078 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:47:12.996089 | orchestrator | 2026-01-01 00:47:12.996100 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-01 00:47:12.996111 | orchestrator | Thursday 01 January 2026 00:47:08 +0000 (0:00:00.162) 0:00:22.320 ****** 2026-01-01 00:47:12.996122 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:47:12.996134 | orchestrator | 2026-01-01 00:47:12.996145 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-01 00:47:12.996156 | orchestrator | Thursday 01 January 2026 00:47:08 +0000 
(0:00:00.561) 0:00:22.881 ****** 2026-01-01 00:47:12.996167 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:47:12.996178 | orchestrator | 2026-01-01 00:47:12.996189 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-01 00:47:12.996200 | orchestrator | Thursday 01 January 2026 00:47:09 +0000 (0:00:00.513) 0:00:23.395 ****** 2026-01-01 00:47:12.996211 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:47:12.996222 | orchestrator | 2026-01-01 00:47:12.996233 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-01 00:47:12.996244 | orchestrator | Thursday 01 January 2026 00:47:09 +0000 (0:00:00.161) 0:00:23.556 ****** 2026-01-01 00:47:12.996255 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'vg_name': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'}) 2026-01-01 00:47:12.996284 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'vg_name': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'}) 2026-01-01 00:47:12.996297 | orchestrator | 2026-01-01 00:47:12.996310 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-01 00:47:12.996322 | orchestrator | Thursday 01 January 2026 00:47:09 +0000 (0:00:00.197) 0:00:23.753 ****** 2026-01-01 00:47:12.996357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})  2026-01-01 00:47:12.996370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})  2026-01-01 00:47:12.996383 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:47:12.996397 | orchestrator | 2026-01-01 00:47:12.996410 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-01 00:47:12.996423 | orchestrator | Thursday 01 January 2026 00:47:09 +0000 (0:00:00.341) 0:00:24.095 ****** 2026-01-01 00:47:12.996435 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})  2026-01-01 00:47:12.996448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})  2026-01-01 00:47:12.996460 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:47:12.996475 | orchestrator | 2026-01-01 00:47:12.996487 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-01 00:47:12.996499 | orchestrator | Thursday 01 January 2026 00:47:10 +0000 (0:00:00.183) 0:00:24.278 ****** 2026-01-01 00:47:12.996512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'})  2026-01-01 00:47:12.996525 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'})  2026-01-01 00:47:12.996537 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:47:12.996548 | orchestrator | 2026-01-01 00:47:12.996559 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-01 00:47:12.996570 | orchestrator | Thursday 01 January 2026 00:47:10 +0000 (0:00:00.156) 0:00:24.435 ****** 2026-01-01 00:47:12.996597 | orchestrator | ok: [testbed-node-3] => { 2026-01-01 00:47:12.996609 | orchestrator |  "lvm_report": { 2026-01-01 00:47:12.996620 | orchestrator |  "lv": [ 2026-01-01 00:47:12.996631 | orchestrator |  { 2026-01-01 00:47:12.996642 | orchestrator |  "lv_name": 
"osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf", 2026-01-01 00:47:12.996654 | orchestrator |  "vg_name": "ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf" 2026-01-01 00:47:12.996665 | orchestrator |  }, 2026-01-01 00:47:12.996676 | orchestrator |  { 2026-01-01 00:47:12.996687 | orchestrator |  "lv_name": "osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95", 2026-01-01 00:47:12.996698 | orchestrator |  "vg_name": "ceph-d1d2d931-4931-5db0-ab2b-78163b321b95" 2026-01-01 00:47:12.996709 | orchestrator |  } 2026-01-01 00:47:12.996720 | orchestrator |  ], 2026-01-01 00:47:12.996731 | orchestrator |  "pv": [ 2026-01-01 00:47:12.996762 | orchestrator |  { 2026-01-01 00:47:12.996774 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-01 00:47:12.996786 | orchestrator |  "vg_name": "ceph-d1d2d931-4931-5db0-ab2b-78163b321b95" 2026-01-01 00:47:12.996796 | orchestrator |  }, 2026-01-01 00:47:12.996807 | orchestrator |  { 2026-01-01 00:47:12.996818 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-01 00:47:12.996829 | orchestrator |  "vg_name": "ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf" 2026-01-01 00:47:12.996840 | orchestrator |  } 2026-01-01 00:47:12.996851 | orchestrator |  ] 2026-01-01 00:47:12.996862 | orchestrator |  } 2026-01-01 00:47:12.996873 | orchestrator | } 2026-01-01 00:47:12.996884 | orchestrator | 2026-01-01 00:47:12.996895 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-01 00:47:12.996906 | orchestrator | 2026-01-01 00:47:12.996917 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-01 00:47:12.996992 | orchestrator | Thursday 01 January 2026 00:47:10 +0000 (0:00:00.328) 0:00:24.764 ****** 2026-01-01 00:47:12.997004 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-01 00:47:12.997016 | orchestrator | 2026-01-01 00:47:12.997027 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-01 
00:47:12.997038 | orchestrator | Thursday 01 January 2026 00:47:10 +0000 (0:00:00.269) 0:00:25.034 ****** 2026-01-01 00:47:12.997049 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:47:12.997060 | orchestrator | 2026-01-01 00:47:12.997070 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:12.997081 | orchestrator | Thursday 01 January 2026 00:47:11 +0000 (0:00:00.210) 0:00:25.244 ****** 2026-01-01 00:47:12.997092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-01 00:47:12.997103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-01 00:47:12.997114 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-01 00:47:12.997125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-01 00:47:12.997137 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-01 00:47:12.997147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-01 00:47:12.997164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-01 00:47:12.997175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-01 00:47:12.997186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-01 00:47:12.997197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-01 00:47:12.997208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-01 00:47:12.997219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-01 00:47:12.997230 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-01 00:47:12.997241 | orchestrator | 2026-01-01 00:47:12.997252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:12.997263 | orchestrator | Thursday 01 January 2026 00:47:11 +0000 (0:00:00.457) 0:00:25.702 ****** 2026-01-01 00:47:12.997274 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:12.997285 | orchestrator | 2026-01-01 00:47:12.997296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:12.997307 | orchestrator | Thursday 01 January 2026 00:47:11 +0000 (0:00:00.212) 0:00:25.915 ****** 2026-01-01 00:47:12.997318 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:12.997328 | orchestrator | 2026-01-01 00:47:12.997339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:12.997350 | orchestrator | Thursday 01 January 2026 00:47:11 +0000 (0:00:00.178) 0:00:26.094 ****** 2026-01-01 00:47:12.997361 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:12.997372 | orchestrator | 2026-01-01 00:47:12.997383 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:12.997394 | orchestrator | Thursday 01 January 2026 00:47:12 +0000 (0:00:00.499) 0:00:26.593 ****** 2026-01-01 00:47:12.997405 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:12.997416 | orchestrator | 2026-01-01 00:47:12.997427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:12.997438 | orchestrator | Thursday 01 January 2026 00:47:12 +0000 (0:00:00.240) 0:00:26.833 ****** 2026-01-01 00:47:12.997449 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:12.997460 | orchestrator | 2026-01-01 00:47:12.997471 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-01-01 00:47:12.997488 | orchestrator | Thursday 01 January 2026 00:47:12 +0000 (0:00:00.194) 0:00:27.027 ****** 2026-01-01 00:47:12.997499 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:12.997510 | orchestrator | 2026-01-01 00:47:12.997530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:25.016816 | orchestrator | Thursday 01 January 2026 00:47:12 +0000 (0:00:00.187) 0:00:27.215 ****** 2026-01-01 00:47:25.016962 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.016994 | orchestrator | 2026-01-01 00:47:25.017016 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:25.017036 | orchestrator | Thursday 01 January 2026 00:47:13 +0000 (0:00:00.194) 0:00:27.409 ****** 2026-01-01 00:47:25.017056 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.017076 | orchestrator | 2026-01-01 00:47:25.017089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:25.017100 | orchestrator | Thursday 01 January 2026 00:47:13 +0000 (0:00:00.210) 0:00:27.620 ****** 2026-01-01 00:47:25.017112 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c) 2026-01-01 00:47:25.017124 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c) 2026-01-01 00:47:25.017135 | orchestrator | 2026-01-01 00:47:25.017146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:25.017158 | orchestrator | Thursday 01 January 2026 00:47:13 +0000 (0:00:00.392) 0:00:28.012 ****** 2026-01-01 00:47:25.017169 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_de0fd659-da77-4142-b79c-576ab421e122) 2026-01-01 00:47:25.017180 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_de0fd659-da77-4142-b79c-576ab421e122) 2026-01-01 00:47:25.017191 | orchestrator | 2026-01-01 00:47:25.017202 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:25.017213 | orchestrator | Thursday 01 January 2026 00:47:14 +0000 (0:00:00.494) 0:00:28.506 ****** 2026-01-01 00:47:25.017224 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_76bcb63e-af93-4dd0-8322-d7e595f79a86) 2026-01-01 00:47:25.017235 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_76bcb63e-af93-4dd0-8322-d7e595f79a86) 2026-01-01 00:47:25.017254 | orchestrator | 2026-01-01 00:47:25.017273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:25.017294 | orchestrator | Thursday 01 January 2026 00:47:14 +0000 (0:00:00.433) 0:00:28.940 ****** 2026-01-01 00:47:25.017315 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5a98e6dc-3aee-4304-8fec-4f13a362fcb3) 2026-01-01 00:47:25.017334 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5a98e6dc-3aee-4304-8fec-4f13a362fcb3) 2026-01-01 00:47:25.017350 | orchestrator | 2026-01-01 00:47:25.017364 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:47:25.017381 | orchestrator | Thursday 01 January 2026 00:47:15 +0000 (0:00:00.761) 0:00:29.702 ****** 2026-01-01 00:47:25.017400 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-01 00:47:25.017419 | orchestrator | 2026-01-01 00:47:25.017438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.017457 | orchestrator | Thursday 01 January 2026 00:47:15 +0000 (0:00:00.503) 0:00:30.206 ****** 2026-01-01 00:47:25.017479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-01-01 00:47:25.017501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-01 00:47:25.017521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-01 00:47:25.017543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-01 00:47:25.017564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-01 00:47:25.017645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-01 00:47:25.017667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-01 00:47:25.017679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-01 00:47:25.017690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-01 00:47:25.017701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-01 00:47:25.017712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-01 00:47:25.017723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-01 00:47:25.017734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-01 00:47:25.017774 | orchestrator | 2026-01-01 00:47:25.017787 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.017797 | orchestrator | Thursday 01 January 2026 00:47:16 +0000 (0:00:00.569) 0:00:30.775 ****** 2026-01-01 00:47:25.017808 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.017819 | orchestrator | 2026-01-01 
00:47:25.017831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.017842 | orchestrator | Thursday 01 January 2026 00:47:16 +0000 (0:00:00.245) 0:00:31.021 ****** 2026-01-01 00:47:25.017853 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.017864 | orchestrator | 2026-01-01 00:47:25.017875 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.017885 | orchestrator | Thursday 01 January 2026 00:47:16 +0000 (0:00:00.200) 0:00:31.222 ****** 2026-01-01 00:47:25.017896 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.017907 | orchestrator | 2026-01-01 00:47:25.017939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.017951 | orchestrator | Thursday 01 January 2026 00:47:17 +0000 (0:00:00.206) 0:00:31.428 ****** 2026-01-01 00:47:25.017961 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.017972 | orchestrator | 2026-01-01 00:47:25.017983 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.017994 | orchestrator | Thursday 01 January 2026 00:47:17 +0000 (0:00:00.213) 0:00:31.641 ****** 2026-01-01 00:47:25.018004 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.018075 | orchestrator | 2026-01-01 00:47:25.018088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.018099 | orchestrator | Thursday 01 January 2026 00:47:17 +0000 (0:00:00.221) 0:00:31.862 ****** 2026-01-01 00:47:25.018110 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.018120 | orchestrator | 2026-01-01 00:47:25.018131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.018142 | orchestrator | Thursday 01 January 2026 00:47:17 +0000 (0:00:00.234) 
0:00:32.097 ****** 2026-01-01 00:47:25.018153 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.018164 | orchestrator | 2026-01-01 00:47:25.018174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.018185 | orchestrator | Thursday 01 January 2026 00:47:18 +0000 (0:00:00.243) 0:00:32.340 ****** 2026-01-01 00:47:25.018196 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.018207 | orchestrator | 2026-01-01 00:47:25.018218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.018229 | orchestrator | Thursday 01 January 2026 00:47:18 +0000 (0:00:00.233) 0:00:32.574 ****** 2026-01-01 00:47:25.018240 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-01 00:47:25.018251 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-01 00:47:25.018264 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-01 00:47:25.018275 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-01 00:47:25.018297 | orchestrator | 2026-01-01 00:47:25.018308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.018319 | orchestrator | Thursday 01 January 2026 00:47:19 +0000 (0:00:01.111) 0:00:33.685 ****** 2026-01-01 00:47:25.018329 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.018340 | orchestrator | 2026-01-01 00:47:25.018351 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.018362 | orchestrator | Thursday 01 January 2026 00:47:19 +0000 (0:00:00.238) 0:00:33.924 ****** 2026-01-01 00:47:25.018373 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.018392 | orchestrator | 2026-01-01 00:47:25.018412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.018430 | orchestrator | Thursday 01 
January 2026 00:47:20 +0000 (0:00:00.891) 0:00:34.815 ****** 2026-01-01 00:47:25.018441 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.018452 | orchestrator | 2026-01-01 00:47:25.018463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:25.018474 | orchestrator | Thursday 01 January 2026 00:47:20 +0000 (0:00:00.269) 0:00:35.085 ****** 2026-01-01 00:47:25.018485 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.018496 | orchestrator | 2026-01-01 00:47:25.018506 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-01 00:47:25.018524 | orchestrator | Thursday 01 January 2026 00:47:21 +0000 (0:00:00.238) 0:00:35.324 ****** 2026-01-01 00:47:25.018535 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:47:25.018546 | orchestrator | 2026-01-01 00:47:25.018556 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-01 00:47:25.018567 | orchestrator | Thursday 01 January 2026 00:47:21 +0000 (0:00:00.185) 0:00:35.509 ****** 2026-01-01 00:47:25.018578 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fb51feb3-1d35-54b5-8ae7-a768510de0c7'}}) 2026-01-01 00:47:25.018589 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'}}) 2026-01-01 00:47:25.018600 | orchestrator | 2026-01-01 00:47:25.018618 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-01 00:47:25.018638 | orchestrator | Thursday 01 January 2026 00:47:21 +0000 (0:00:00.232) 0:00:35.742 ****** 2026-01-01 00:47:25.018652 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'}) 2026-01-01 00:47:25.018664 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:25.018675 | orchestrator |
2026-01-01 00:47:25.018686 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-01 00:47:25.018696 | orchestrator | Thursday 01 January 2026 00:47:23 +0000 (0:00:01.980) 0:00:37.722 ******
2026-01-01 00:47:25.018707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:25.018720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:25.018731 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:25.018762 | orchestrator |
2026-01-01 00:47:25.018775 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-01 00:47:25.018786 | orchestrator | Thursday 01 January 2026 00:47:23 +0000 (0:00:00.179) 0:00:37.901 ******
2026-01-01 00:47:25.018797 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:25.018817 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:31.553923 | orchestrator |
2026-01-01 00:47:31.554113 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-01 00:47:31.554138 | orchestrator | Thursday 01 January 2026 00:47:25 +0000 (0:00:01.330) 0:00:39.231 ******
2026-01-01 00:47:31.554151 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:31.554164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:31.554176 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554188 | orchestrator |
2026-01-01 00:47:31.554200 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-01 00:47:31.554211 | orchestrator | Thursday 01 January 2026 00:47:25 +0000 (0:00:00.218) 0:00:39.449 ******
2026-01-01 00:47:31.554222 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554233 | orchestrator |
2026-01-01 00:47:31.554244 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-01 00:47:31.554255 | orchestrator | Thursday 01 January 2026 00:47:25 +0000 (0:00:00.150) 0:00:39.600 ******
2026-01-01 00:47:31.554266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:31.554278 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:31.554289 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554299 | orchestrator |
2026-01-01 00:47:31.554310 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-01 00:47:31.554321 | orchestrator | Thursday 01 January 2026 00:47:25 +0000 (0:00:00.172) 0:00:39.772 ******
2026-01-01 00:47:31.554332 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554343 | orchestrator |
2026-01-01 00:47:31.554354 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-01 00:47:31.554365 | orchestrator | Thursday 01 January 2026 00:47:25 +0000 (0:00:00.147) 0:00:39.920 ******
2026-01-01 00:47:31.554376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:31.554387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:31.554398 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554409 | orchestrator |
2026-01-01 00:47:31.554420 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-01 00:47:31.554448 | orchestrator | Thursday 01 January 2026 00:47:26 +0000 (0:00:00.428) 0:00:40.348 ******
2026-01-01 00:47:31.554460 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554471 | orchestrator |
2026-01-01 00:47:31.554482 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-01 00:47:31.554492 | orchestrator | Thursday 01 January 2026 00:47:26 +0000 (0:00:00.172) 0:00:40.520 ******
2026-01-01 00:47:31.554503 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:31.554514 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:31.554526 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554537 | orchestrator |
2026-01-01 00:47:31.554547 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-01 00:47:31.554558 | orchestrator | Thursday 01 January 2026 00:47:26 +0000 (0:00:00.141) 0:00:40.706 ******
2026-01-01 00:47:31.554569 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:31.554601 | orchestrator |
2026-01-01 00:47:31.554613 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-01 00:47:31.554624 | orchestrator | Thursday 01 January 2026 00:47:26 +0000 (0:00:00.141) 0:00:40.847 ******
2026-01-01 00:47:31.554635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:31.554646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:31.554658 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554668 | orchestrator |
2026-01-01 00:47:31.554679 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-01 00:47:31.554690 | orchestrator | Thursday 01 January 2026 00:47:26 +0000 (0:00:00.193) 0:00:41.041 ******
2026-01-01 00:47:31.554701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:31.554712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:31.554723 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554734 | orchestrator |
2026-01-01 00:47:31.554771 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-01 00:47:31.554808 | orchestrator | Thursday 01 January 2026 00:47:27 +0000 (0:00:00.194) 0:00:41.236 ******
2026-01-01 00:47:31.554820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:31.554831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:31.554842 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554853 | orchestrator |
2026-01-01 00:47:31.554864 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-01 00:47:31.554875 | orchestrator | Thursday 01 January 2026 00:47:27 +0000 (0:00:00.204) 0:00:41.441 ******
2026-01-01 00:47:31.554886 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554897 | orchestrator |
2026-01-01 00:47:31.554907 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-01 00:47:31.554918 | orchestrator | Thursday 01 January 2026 00:47:27 +0000 (0:00:00.183) 0:00:41.625 ******
2026-01-01 00:47:31.554929 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554940 | orchestrator |
2026-01-01 00:47:31.554951 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-01 00:47:31.554961 | orchestrator | Thursday 01 January 2026 00:47:27 +0000 (0:00:00.170) 0:00:41.795 ******
2026-01-01 00:47:31.554972 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.554983 | orchestrator |
2026-01-01 00:47:31.554994 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-01 00:47:31.555004 | orchestrator | Thursday 01 January 2026 00:47:27 +0000 (0:00:00.168) 0:00:41.963 ******
2026-01-01 00:47:31.555015 | orchestrator | ok: [testbed-node-4] => {
2026-01-01 00:47:31.555026 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-01-01 00:47:31.555037 | orchestrator | }
2026-01-01 00:47:31.555048 | orchestrator |
2026-01-01 00:47:31.555059 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-01 00:47:31.555070 | orchestrator | Thursday 01 January 2026 00:47:27 +0000 (0:00:00.149) 0:00:42.113 ******
2026-01-01 00:47:31.555080 | orchestrator | ok: [testbed-node-4] => {
2026-01-01 00:47:31.555091 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-01-01 00:47:31.555102 | orchestrator | }
2026-01-01 00:47:31.555113 | orchestrator |
2026-01-01 00:47:31.555124 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-01 00:47:31.555134 | orchestrator | Thursday 01 January 2026 00:47:28 +0000 (0:00:00.166) 0:00:42.279 ******
2026-01-01 00:47:31.555155 | orchestrator | ok: [testbed-node-4] => {
2026-01-01 00:47:31.555167 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-01-01 00:47:31.555178 | orchestrator | }
2026-01-01 00:47:31.555189 | orchestrator |
2026-01-01 00:47:31.555200 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-01 00:47:31.555211 | orchestrator | Thursday 01 January 2026 00:47:28 +0000 (0:00:00.519) 0:00:42.798 ******
2026-01-01 00:47:31.555222 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:31.555232 | orchestrator |
2026-01-01 00:47:31.555243 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-01 00:47:31.555254 | orchestrator | Thursday 01 January 2026 00:47:29 +0000 (0:00:00.606) 0:00:43.405 ******
2026-01-01 00:47:31.555265 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:31.555276 | orchestrator |
2026-01-01 00:47:31.555287 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-01 00:47:31.555298 | orchestrator | Thursday 01 January 2026 00:47:29 +0000 (0:00:00.535) 0:00:43.940 ******
2026-01-01 00:47:31.555309 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:31.555320 | orchestrator |
2026-01-01 00:47:31.555331 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-01 00:47:31.555342 | orchestrator | Thursday 01 January 2026 00:47:30 +0000 (0:00:00.168) 0:00:44.479 ******
2026-01-01 00:47:31.555352 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:31.555363 | orchestrator |
2026-01-01 00:47:31.555374 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-01 00:47:31.555385 | orchestrator | Thursday 01 January 2026 00:47:30 +0000 (0:00:00.168) 0:00:44.647 ******
2026-01-01 00:47:31.555396 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.555407 | orchestrator |
2026-01-01 00:47:31.555426 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-01 00:47:31.555437 | orchestrator | Thursday 01 January 2026 00:47:30 +0000 (0:00:00.122) 0:00:44.770 ******
2026-01-01 00:47:31.555448 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.555459 | orchestrator |
2026-01-01 00:47:31.555470 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-01 00:47:31.555480 | orchestrator | Thursday 01 January 2026 00:47:30 +0000 (0:00:00.128) 0:00:44.898 ******
2026-01-01 00:47:31.555491 | orchestrator | ok: [testbed-node-4] => {
2026-01-01 00:47:31.555502 | orchestrator |  "vgs_report": {
2026-01-01 00:47:31.555513 | orchestrator |  "vg": []
2026-01-01 00:47:31.555524 | orchestrator |  }
2026-01-01 00:47:31.555535 | orchestrator | }
2026-01-01 00:47:31.555546 | orchestrator |
2026-01-01 00:47:31.555557 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-01 00:47:31.555568 | orchestrator | Thursday 01 January 2026 00:47:30 +0000 (0:00:00.169) 0:00:45.067 ******
2026-01-01 00:47:31.555578 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.555589 | orchestrator |
2026-01-01 00:47:31.555600 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-01 00:47:31.555611 | orchestrator | Thursday 01 January 2026 00:47:31 +0000 (0:00:00.162) 0:00:45.229 ******
2026-01-01 00:47:31.555622 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.555632 | orchestrator |
2026-01-01 00:47:31.555643 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-01 00:47:31.555654 | orchestrator | Thursday 01 January 2026 00:47:31 +0000 (0:00:00.178) 0:00:45.408 ******
2026-01-01 00:47:31.555665 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.555676 | orchestrator |
2026-01-01 00:47:31.555686 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-01 00:47:31.555697 | orchestrator | Thursday 01 January 2026 00:47:31 +0000 (0:00:00.200) 0:00:45.608 ******
2026-01-01 00:47:31.555708 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:31.555719 | orchestrator |
2026-01-01 00:47:31.555737 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-01 00:47:37.251478 | orchestrator | Thursday 01 January 2026 00:47:31 +0000 (0:00:00.162) 0:00:45.771 ******
2026-01-01 00:47:37.251619 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.251636 | orchestrator |
2026-01-01 00:47:37.251649 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-01 00:47:37.251661 | orchestrator | Thursday 01 January 2026 00:47:31 +0000 (0:00:00.403) 0:00:46.174 ******
2026-01-01 00:47:37.251672 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.251683 | orchestrator |
2026-01-01 00:47:37.251694 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-01 00:47:37.251705 | orchestrator | Thursday 01 January 2026 00:47:32 +0000 (0:00:00.170) 0:00:46.344 ******
2026-01-01 00:47:37.251716 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.251727 | orchestrator |
2026-01-01 00:47:37.251738 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-01 00:47:37.251807 | orchestrator | Thursday 01 January 2026 00:47:32 +0000 (0:00:00.158) 0:00:46.503 ******
2026-01-01 00:47:37.251819 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.251830 | orchestrator |
2026-01-01 00:47:37.251841 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-01 00:47:37.251852 | orchestrator | Thursday 01 January 2026 00:47:32 +0000 (0:00:00.187) 0:00:46.690 ******
2026-01-01 00:47:37.251862 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.251873 | orchestrator |
2026-01-01 00:47:37.251884 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-01 00:47:37.251895 | orchestrator | Thursday 01 January 2026 00:47:32 +0000 (0:00:00.144) 0:00:46.835 ******
2026-01-01 00:47:37.251906 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.251916 | orchestrator |
2026-01-01 00:47:37.251927 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-01 00:47:37.251938 | orchestrator | Thursday 01 January 2026 00:47:32 +0000 (0:00:00.163) 0:00:46.999 ******
2026-01-01 00:47:37.251949 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.251959 | orchestrator |
2026-01-01 00:47:37.251970 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-01 00:47:37.251985 | orchestrator | Thursday 01 January 2026 00:47:32 +0000 (0:00:00.146) 0:00:47.145 ******
2026-01-01 00:47:37.252016 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252030 | orchestrator |
2026-01-01 00:47:37.252042 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-01 00:47:37.252055 | orchestrator | Thursday 01 January 2026 00:47:33 +0000 (0:00:00.132) 0:00:47.278 ******
2026-01-01 00:47:37.252068 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252080 | orchestrator |
2026-01-01 00:47:37.252093 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-01 00:47:37.252105 | orchestrator | Thursday 01 January 2026 00:47:33 +0000 (0:00:00.234) 0:00:47.512 ******
2026-01-01 00:47:37.252117 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252131 | orchestrator |
2026-01-01 00:47:37.252144 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-01 00:47:37.252174 | orchestrator | Thursday 01 January 2026 00:47:33 +0000 (0:00:00.184) 0:00:47.697 ******
2026-01-01 00:47:37.252188 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.252202 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:37.252216 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252228 | orchestrator |
2026-01-01 00:47:37.252241 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-01 00:47:37.252254 | orchestrator | Thursday 01 January 2026 00:47:33 +0000 (0:00:00.239) 0:00:47.936 ******
2026-01-01 00:47:37.252266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.252287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:37.252300 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252313 | orchestrator |
2026-01-01 00:47:37.252325 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-01 00:47:37.252337 | orchestrator | Thursday 01 January 2026 00:47:33 +0000 (0:00:00.151) 0:00:48.088 ******
2026-01-01 00:47:37.252348 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.252359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:37.252370 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252380 | orchestrator |
2026-01-01 00:47:37.252391 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-01 00:47:37.252402 | orchestrator | Thursday 01 January 2026 00:47:34 +0000 (0:00:00.162) 0:00:48.251 ******
2026-01-01 00:47:37.252413 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.252424 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:37.252435 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252446 | orchestrator |
2026-01-01 00:47:37.252475 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-01 00:47:37.252487 | orchestrator | Thursday 01 January 2026 00:47:34 +0000 (0:00:00.482) 0:00:48.733 ******
2026-01-01 00:47:37.252498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.252509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:37.252520 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252531 | orchestrator |
2026-01-01 00:47:37.252542 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-01 00:47:37.252553 | orchestrator | Thursday 01 January 2026 00:47:34 +0000 (0:00:00.216) 0:00:48.950 ******
2026-01-01 00:47:37.252564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.252575 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:37.252586 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252597 | orchestrator |
2026-01-01 00:47:37.252608 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-01 00:47:37.252618 | orchestrator | Thursday 01 January 2026 00:47:34 +0000 (0:00:00.171) 0:00:49.121 ******
2026-01-01 00:47:37.252629 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.252640 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:37.252651 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252662 | orchestrator |
2026-01-01 00:47:37.252673 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-01 00:47:37.252684 | orchestrator | Thursday 01 January 2026 00:47:35 +0000 (0:00:00.198) 0:00:49.320 ******
2026-01-01 00:47:37.252696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.252719 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:37.252730 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.252741 | orchestrator |
2026-01-01 00:47:37.252790 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-01 00:47:37.252802 | orchestrator | Thursday 01 January 2026 00:47:35 +0000 (0:00:00.180) 0:00:49.501 ******
2026-01-01 00:47:37.252813 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:37.252824 | orchestrator |
2026-01-01 00:47:37.252834 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-01 00:47:37.252845 | orchestrator | Thursday 01 January 2026 00:47:35 +0000 (0:00:00.590) 0:00:50.091 ******
2026-01-01 00:47:37.252856 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:37.252867 | orchestrator |
2026-01-01 00:47:37.252878 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-01 00:47:37.252889 | orchestrator | Thursday 01 January 2026 00:47:36 +0000 (0:00:00.622) 0:00:50.714 ******
2026-01-01 00:47:37.252899 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:37.252910 | orchestrator |
2026-01-01 00:47:37.252921 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-01 00:47:37.252932 | orchestrator | Thursday 01 January 2026 00:47:36 +0000 (0:00:00.180) 0:00:50.895 ******
2026-01-01 00:47:37.252943 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'vg_name': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:37.252955 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'vg_name': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.252966 | orchestrator |
2026-01-01 00:47:37.252976 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-01 00:47:37.252987 | orchestrator | Thursday 01 January 2026 00:47:36 +0000 (0:00:00.192) 0:00:51.087 ******
2026-01-01 00:47:37.252998 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.253009 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:37.253020 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:37.253031 | orchestrator |
2026-01-01 00:47:37.253041 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-01 00:47:37.253052 | orchestrator | Thursday 01 January 2026 00:47:37 +0000 (0:00:00.208) 0:00:51.295 ******
2026-01-01 00:47:37.253063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:37.253082 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:43.915647 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:43.915875 | orchestrator |
2026-01-01 00:47:43.915909 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-01 00:47:43.915934 | orchestrator | Thursday 01 January 2026 00:47:37 +0000 (0:00:00.172) 0:00:51.468 ******
2026-01-01 00:47:43.915955 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'})
2026-01-01 00:47:43.915977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'})
2026-01-01 00:47:43.915998 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:47:43.916054 | orchestrator |
2026-01-01 00:47:43.916068 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-01 00:47:43.916079 | orchestrator | Thursday 01 January 2026 00:47:37 +0000 (0:00:00.158) 0:00:51.627 ******
2026-01-01 00:47:43.916091 | orchestrator | ok: [testbed-node-4] => {
2026-01-01 00:47:43.916102 | orchestrator |  "lvm_report": {
2026-01-01 00:47:43.916114 | orchestrator |  "lv": [
2026-01-01 00:47:43.916125 | orchestrator |  {
2026-01-01 00:47:43.916136 | orchestrator |  "lv_name": "osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef",
2026-01-01 00:47:43.916150 | orchestrator |  "vg_name": "ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef"
2026-01-01 00:47:43.916162 | orchestrator |  },
2026-01-01 00:47:43.916175 | orchestrator |  {
2026-01-01 00:47:43.916188 | orchestrator |  "lv_name": "osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7",
2026-01-01 00:47:43.916200 | orchestrator |  "vg_name": "ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7"
2026-01-01 00:47:43.916212 | orchestrator |  }
2026-01-01 00:47:43.916225 | orchestrator |  ],
2026-01-01 00:47:43.916237 | orchestrator |  "pv": [
2026-01-01 00:47:43.916249 | orchestrator |  {
2026-01-01 00:47:43.916263 | orchestrator |  "pv_name": "/dev/sdb",
2026-01-01 00:47:43.916277 | orchestrator |  "vg_name": "ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7"
2026-01-01 00:47:43.916289 | orchestrator |  },
2026-01-01 00:47:43.916300 | orchestrator |  {
2026-01-01 00:47:43.916310 | orchestrator |  "pv_name": "/dev/sdc",
2026-01-01 00:47:43.916322 | orchestrator |  "vg_name": "ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef"
2026-01-01 00:47:43.916332 | orchestrator |  }
2026-01-01 00:47:43.916343 | orchestrator |  ]
2026-01-01 00:47:43.916354 | orchestrator |  }
2026-01-01 00:47:43.916365 | orchestrator | }
2026-01-01 00:47:43.916384 | orchestrator |
2026-01-01 00:47:43.916402 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-01 00:47:43.916421 | orchestrator |
2026-01-01 00:47:43.916438 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-01 00:47:43.916456 | orchestrator | Thursday 01 January 2026 00:47:37 +0000 (0:00:00.563) 0:00:52.190 ******
2026-01-01 00:47:43.916476 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-01 00:47:43.916497 | orchestrator |
2026-01-01 00:47:43.916516 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-01 00:47:43.916535 | orchestrator | Thursday 01 January 2026 00:47:38 +0000 (0:00:00.282) 0:00:52.472 ******
2026-01-01 00:47:43.916547 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:47:43.916564 | orchestrator |
2026-01-01 00:47:43.916583 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.916601 | orchestrator | Thursday 01 January 2026 00:47:38 +0000 (0:00:00.239) 0:00:52.711 ******
2026-01-01 00:47:43.916620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-01 00:47:43.916638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-01 00:47:43.916655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-01 00:47:43.916672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-01 00:47:43.916691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-01 00:47:43.916711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-01 00:47:43.916729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-01 00:47:43.916777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-01 00:47:43.916797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-01 00:47:43.916832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-01 00:47:43.916852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-01 00:47:43.916870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-01 00:47:43.916887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-01 00:47:43.916898 | orchestrator |
2026-01-01 00:47:43.916914 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.916926 | orchestrator | Thursday 01 January 2026 00:47:38 +0000 (0:00:00.450) 0:00:53.161 ******
2026-01-01 00:47:43.916937 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:43.916948 | orchestrator |
2026-01-01 00:47:43.916959 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.916970 | orchestrator | Thursday 01 January 2026 00:47:39 +0000 (0:00:00.206) 0:00:53.367 ******
2026-01-01 00:47:43.916981 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:43.916992 | orchestrator |
2026-01-01 00:47:43.917003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917036 | orchestrator | Thursday 01 January 2026 00:47:39 +0000 (0:00:00.228) 0:00:53.596 ******
2026-01-01 00:47:43.917047 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:43.917058 | orchestrator |
2026-01-01 00:47:43.917074 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917092 | orchestrator | Thursday 01 January 2026 00:47:39 +0000 (0:00:00.202) 0:00:53.799 ******
2026-01-01 00:47:43.917109 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:43.917127 | orchestrator |
2026-01-01 00:47:43.917145 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917228 | orchestrator | Thursday 01 January 2026 00:47:39 +0000 (0:00:00.197) 0:00:53.996 ******
2026-01-01 00:47:43.917253 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:43.917273 | orchestrator |
2026-01-01 00:47:43.917292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917310 | orchestrator | Thursday 01 January 2026 00:47:39 +0000 (0:00:00.219) 0:00:54.216 ******
2026-01-01 00:47:43.917327 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:43.917344 | orchestrator |
2026-01-01 00:47:43.917363 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917381 | orchestrator | Thursday 01 January 2026 00:47:40 +0000 (0:00:00.664) 0:00:54.880 ******
2026-01-01 00:47:43.917399 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:43.917418 | orchestrator |
2026-01-01 00:47:43.917438 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917457 | orchestrator | Thursday 01 January 2026 00:47:40 +0000 (0:00:00.218) 0:00:55.099 ******
2026-01-01 00:47:43.917475 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:43.917490 | orchestrator |
2026-01-01 00:47:43.917502 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917513 | orchestrator | Thursday 01 January 2026 00:47:41 +0000 (0:00:00.214) 0:00:55.313 ******
2026-01-01 00:47:43.917524 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4)
2026-01-01 00:47:43.917537 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4)
2026-01-01 00:47:43.917552 | orchestrator |
2026-01-01 00:47:43.917570 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917589 | orchestrator | Thursday 01 January 2026 00:47:41 +0000 (0:00:00.500) 0:00:55.814 ******
2026-01-01 00:47:43.917606 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab)
2026-01-01 00:47:43.917625 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab)
2026-01-01 00:47:43.917643 | orchestrator |
2026-01-01 00:47:43.917662 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917699 | orchestrator | Thursday 01 January 2026 00:47:42 +0000 (0:00:00.571) 0:00:56.386 ******
2026-01-01 00:47:43.917717 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c)
2026-01-01 00:47:43.917736 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c)
2026-01-01 00:47:43.917788 | orchestrator |
2026-01-01 00:47:43.917807 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917823 | orchestrator | Thursday 01 January 2026 00:47:42 +0000 (0:00:00.455) 0:00:56.841 ******
2026-01-01 00:47:43.917834 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f)
2026-01-01 00:47:43.917846 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f)
2026-01-01 00:47:43.917866 | orchestrator |
2026-01-01 00:47:43.917949 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:47:43.917973 | orchestrator | Thursday 01 January 2026 00:47:43 +0000 (0:00:00.469) 0:00:57.311 ******
2026-01-01 00:47:43.917990 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-01 00:47:43.918006 | orchestrator |
2026-01-01 00:47:43.918104 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:43.918125 | orchestrator | Thursday 01 January 2026 00:47:43 +0000 (0:00:00.336) 0:00:57.647 ******
2026-01-01 00:47:43.918146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-01 00:47:43.918166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-01 00:47:43.918187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-01 00:47:43.918206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-01 00:47:43.918220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-01 00:47:43.918231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-01 00:47:43.918242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-01 00:47:43.918253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-01 00:47:43.918264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-01 00:47:43.918274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-01 00:47:43.918285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-01 00:47:43.918313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-01 00:47:53.834357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-01 00:47:53.834476 | orchestrator |
2026-01-01 00:47:53.834494 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:53.834506 | orchestrator | Thursday 01 January 2026 00:47:43 +0000 (0:00:00.481) 0:00:58.129 ******
2026-01-01 00:47:53.834518 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:53.834530 | orchestrator |
2026-01-01 00:47:53.834541 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:53.834552 | orchestrator | Thursday 01 January 2026 00:47:44 +0000 (0:00:00.209) 0:00:58.339 ******
2026-01-01 00:47:53.834563 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:53.834574 | orchestrator |
2026-01-01 00:47:53.834585 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:53.834596 | orchestrator | Thursday 01 January 2026 00:47:44 +0000 (0:00:00.734) 0:00:59.073 ******
2026-01-01 00:47:53.834635 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:47:53.834646 | orchestrator |
2026-01-01 00:47:53.834657 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:47:53.834668 |
orchestrator | Thursday 01 January 2026 00:47:45 +0000 (0:00:00.214) 0:00:59.287 ****** 2026-01-01 00:47:53.834679 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.834690 | orchestrator | 2026-01-01 00:47:53.834701 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:53.834712 | orchestrator | Thursday 01 January 2026 00:47:45 +0000 (0:00:00.240) 0:00:59.528 ****** 2026-01-01 00:47:53.834723 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.834734 | orchestrator | 2026-01-01 00:47:53.834808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:53.834823 | orchestrator | Thursday 01 January 2026 00:47:45 +0000 (0:00:00.308) 0:00:59.836 ****** 2026-01-01 00:47:53.834834 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.834845 | orchestrator | 2026-01-01 00:47:53.834856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:53.834867 | orchestrator | Thursday 01 January 2026 00:47:45 +0000 (0:00:00.266) 0:01:00.102 ****** 2026-01-01 00:47:53.834877 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.834888 | orchestrator | 2026-01-01 00:47:53.834899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:53.834910 | orchestrator | Thursday 01 January 2026 00:47:46 +0000 (0:00:00.206) 0:01:00.308 ****** 2026-01-01 00:47:53.834920 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.834931 | orchestrator | 2026-01-01 00:47:53.834942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:53.834953 | orchestrator | Thursday 01 January 2026 00:47:46 +0000 (0:00:00.238) 0:01:00.547 ****** 2026-01-01 00:47:53.834980 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-01 00:47:53.834992 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-01 00:47:53.835004 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-01 00:47:53.835014 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-01 00:47:53.835025 | orchestrator | 2026-01-01 00:47:53.835036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:53.835047 | orchestrator | Thursday 01 January 2026 00:47:47 +0000 (0:00:00.709) 0:01:01.256 ****** 2026-01-01 00:47:53.835058 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.835068 | orchestrator | 2026-01-01 00:47:53.835079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:53.835090 | orchestrator | Thursday 01 January 2026 00:47:47 +0000 (0:00:00.222) 0:01:01.479 ****** 2026-01-01 00:47:53.835101 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.835112 | orchestrator | 2026-01-01 00:47:53.835123 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:53.835134 | orchestrator | Thursday 01 January 2026 00:47:47 +0000 (0:00:00.205) 0:01:01.684 ****** 2026-01-01 00:47:53.835144 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.835155 | orchestrator | 2026-01-01 00:47:53.835166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:47:53.835177 | orchestrator | Thursday 01 January 2026 00:47:47 +0000 (0:00:00.268) 0:01:01.953 ****** 2026-01-01 00:47:53.835188 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.835198 | orchestrator | 2026-01-01 00:47:53.835209 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-01 00:47:53.835220 | orchestrator | Thursday 01 January 2026 00:47:47 +0000 (0:00:00.263) 0:01:02.216 ****** 2026-01-01 00:47:53.835231 | orchestrator | skipping: [testbed-node-5] 2026-01-01 
00:47:53.835242 | orchestrator | 2026-01-01 00:47:53.835253 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-01 00:47:53.835263 | orchestrator | Thursday 01 January 2026 00:47:48 +0000 (0:00:00.504) 0:01:02.720 ****** 2026-01-01 00:47:53.835274 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a4d57ac9-6a13-5c0f-8510-006bddd85eb9'}}) 2026-01-01 00:47:53.835297 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3a96b51d-dfb4-5025-9237-a7295cf21147'}}) 2026-01-01 00:47:53.835308 | orchestrator | 2026-01-01 00:47:53.835319 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-01 00:47:53.835331 | orchestrator | Thursday 01 January 2026 00:47:48 +0000 (0:00:00.198) 0:01:02.919 ****** 2026-01-01 00:47:53.835342 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'}) 2026-01-01 00:47:53.835354 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'}) 2026-01-01 00:47:53.835364 | orchestrator | 2026-01-01 00:47:53.835376 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-01 00:47:53.835405 | orchestrator | Thursday 01 January 2026 00:47:50 +0000 (0:00:01.831) 0:01:04.750 ****** 2026-01-01 00:47:53.835417 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:47:53.835429 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:47:53.835440 | orchestrator | skipping: 
[testbed-node-5] 2026-01-01 00:47:53.835451 | orchestrator | 2026-01-01 00:47:53.835461 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-01 00:47:53.835473 | orchestrator | Thursday 01 January 2026 00:47:50 +0000 (0:00:00.252) 0:01:05.003 ****** 2026-01-01 00:47:53.835484 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'}) 2026-01-01 00:47:53.835495 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'}) 2026-01-01 00:47:53.835506 | orchestrator | 2026-01-01 00:47:53.835517 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-01 00:47:53.835527 | orchestrator | Thursday 01 January 2026 00:47:52 +0000 (0:00:01.362) 0:01:06.365 ****** 2026-01-01 00:47:53.835538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:47:53.835549 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:47:53.835560 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.835571 | orchestrator | 2026-01-01 00:47:53.835582 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-01 00:47:53.835593 | orchestrator | Thursday 01 January 2026 00:47:52 +0000 (0:00:00.164) 0:01:06.530 ****** 2026-01-01 00:47:53.835603 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.835614 | orchestrator | 2026-01-01 00:47:53.835625 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-01 00:47:53.835636 | 
orchestrator | Thursday 01 January 2026 00:47:52 +0000 (0:00:00.165) 0:01:06.696 ****** 2026-01-01 00:47:53.835652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:47:53.835663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:47:53.835674 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.835685 | orchestrator | 2026-01-01 00:47:53.835696 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-01 00:47:53.835716 | orchestrator | Thursday 01 January 2026 00:47:52 +0000 (0:00:00.159) 0:01:06.855 ****** 2026-01-01 00:47:53.835727 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.835737 | orchestrator | 2026-01-01 00:47:53.835770 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-01 00:47:53.835782 | orchestrator | Thursday 01 January 2026 00:47:52 +0000 (0:00:00.158) 0:01:07.014 ****** 2026-01-01 00:47:53.835793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:47:53.835804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:47:53.835815 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.835826 | orchestrator | 2026-01-01 00:47:53.835837 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-01 00:47:53.835848 | orchestrator | Thursday 01 January 2026 00:47:52 +0000 (0:00:00.182) 0:01:07.196 ****** 2026-01-01 00:47:53.835858 | orchestrator | 
skipping: [testbed-node-5] 2026-01-01 00:47:53.835869 | orchestrator | 2026-01-01 00:47:53.835880 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-01 00:47:53.835891 | orchestrator | Thursday 01 January 2026 00:47:53 +0000 (0:00:00.145) 0:01:07.342 ****** 2026-01-01 00:47:53.835902 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:47:53.835913 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:47:53.835924 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:47:53.835935 | orchestrator | 2026-01-01 00:47:53.835945 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-01 00:47:53.835956 | orchestrator | Thursday 01 January 2026 00:47:53 +0000 (0:00:00.156) 0:01:07.498 ****** 2026-01-01 00:47:53.835967 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:47:53.835978 | orchestrator | 2026-01-01 00:47:53.835989 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-01 00:47:53.836000 | orchestrator | Thursday 01 January 2026 00:47:53 +0000 (0:00:00.378) 0:01:07.877 ****** 2026-01-01 00:47:53.836018 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:48:00.463480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:48:00.463591 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.463607 | orchestrator | 2026-01-01 00:48:00.463620 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-01 00:48:00.463633 | orchestrator | Thursday 01 January 2026 00:47:53 +0000 (0:00:00.178) 0:01:08.055 ****** 2026-01-01 00:48:00.463646 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:48:00.463657 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:48:00.463669 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.463680 | orchestrator | 2026-01-01 00:48:00.463691 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-01 00:48:00.463702 | orchestrator | Thursday 01 January 2026 00:47:54 +0000 (0:00:00.168) 0:01:08.224 ****** 2026-01-01 00:48:00.463714 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:48:00.463725 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:48:00.463823 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.463846 | orchestrator | 2026-01-01 00:48:00.463863 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-01 00:48:00.463875 | orchestrator | Thursday 01 January 2026 00:47:54 +0000 (0:00:00.183) 0:01:08.408 ****** 2026-01-01 00:48:00.463886 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.463897 | orchestrator | 2026-01-01 00:48:00.463908 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-01 00:48:00.463919 | orchestrator | Thursday 01 January 2026 00:47:54 +0000 
(0:00:00.137) 0:01:08.545 ****** 2026-01-01 00:48:00.463930 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.463940 | orchestrator | 2026-01-01 00:48:00.463951 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-01 00:48:00.463962 | orchestrator | Thursday 01 January 2026 00:47:54 +0000 (0:00:00.143) 0:01:08.688 ****** 2026-01-01 00:48:00.463973 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.463984 | orchestrator | 2026-01-01 00:48:00.463995 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-01 00:48:00.464008 | orchestrator | Thursday 01 January 2026 00:47:54 +0000 (0:00:00.144) 0:01:08.833 ****** 2026-01-01 00:48:00.464022 | orchestrator | ok: [testbed-node-5] => { 2026-01-01 00:48:00.464034 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-01 00:48:00.464048 | orchestrator | } 2026-01-01 00:48:00.464060 | orchestrator | 2026-01-01 00:48:00.464073 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-01 00:48:00.464086 | orchestrator | Thursday 01 January 2026 00:47:54 +0000 (0:00:00.155) 0:01:08.988 ****** 2026-01-01 00:48:00.464098 | orchestrator | ok: [testbed-node-5] => { 2026-01-01 00:48:00.464111 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-01 00:48:00.464124 | orchestrator | } 2026-01-01 00:48:00.464137 | orchestrator | 2026-01-01 00:48:00.464150 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-01 00:48:00.464163 | orchestrator | Thursday 01 January 2026 00:47:54 +0000 (0:00:00.155) 0:01:09.143 ****** 2026-01-01 00:48:00.464177 | orchestrator | ok: [testbed-node-5] => { 2026-01-01 00:48:00.464190 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-01 00:48:00.464203 | orchestrator | } 2026-01-01 00:48:00.464216 | orchestrator | 2026-01-01 00:48:00.464229 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-01-01 00:48:00.464242 | orchestrator | Thursday 01 January 2026 00:47:55 +0000 (0:00:00.166) 0:01:09.309 ****** 2026-01-01 00:48:00.464255 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:48:00.464269 | orchestrator | 2026-01-01 00:48:00.464282 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-01 00:48:00.464292 | orchestrator | Thursday 01 January 2026 00:47:55 +0000 (0:00:00.635) 0:01:09.945 ****** 2026-01-01 00:48:00.464303 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:48:00.464314 | orchestrator | 2026-01-01 00:48:00.464325 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-01 00:48:00.464336 | orchestrator | Thursday 01 January 2026 00:47:56 +0000 (0:00:00.511) 0:01:10.457 ****** 2026-01-01 00:48:00.464346 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:48:00.464357 | orchestrator | 2026-01-01 00:48:00.464368 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-01 00:48:00.464379 | orchestrator | Thursday 01 January 2026 00:47:57 +0000 (0:00:00.795) 0:01:11.253 ****** 2026-01-01 00:48:00.464390 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:48:00.464401 | orchestrator | 2026-01-01 00:48:00.464411 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-01 00:48:00.464422 | orchestrator | Thursday 01 January 2026 00:47:57 +0000 (0:00:00.159) 0:01:11.412 ****** 2026-01-01 00:48:00.464433 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.464444 | orchestrator | 2026-01-01 00:48:00.464455 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-01 00:48:00.464474 | orchestrator | Thursday 01 January 2026 00:47:57 +0000 (0:00:00.147) 0:01:11.559 ****** 2026-01-01 00:48:00.464485 | orchestrator | 
skipping: [testbed-node-5] 2026-01-01 00:48:00.464496 | orchestrator | 2026-01-01 00:48:00.464507 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-01 00:48:00.464538 | orchestrator | Thursday 01 January 2026 00:47:57 +0000 (0:00:00.118) 0:01:11.677 ****** 2026-01-01 00:48:00.464549 | orchestrator | ok: [testbed-node-5] => { 2026-01-01 00:48:00.464560 | orchestrator |  "vgs_report": { 2026-01-01 00:48:00.464572 | orchestrator |  "vg": [] 2026-01-01 00:48:00.464600 | orchestrator |  } 2026-01-01 00:48:00.464611 | orchestrator | } 2026-01-01 00:48:00.464622 | orchestrator | 2026-01-01 00:48:00.464633 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-01 00:48:00.464644 | orchestrator | Thursday 01 January 2026 00:47:57 +0000 (0:00:00.160) 0:01:11.838 ****** 2026-01-01 00:48:00.464655 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.464666 | orchestrator | 2026-01-01 00:48:00.464677 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-01 00:48:00.464688 | orchestrator | Thursday 01 January 2026 00:47:57 +0000 (0:00:00.173) 0:01:12.012 ****** 2026-01-01 00:48:00.464699 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.464710 | orchestrator | 2026-01-01 00:48:00.464721 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-01 00:48:00.464732 | orchestrator | Thursday 01 January 2026 00:47:57 +0000 (0:00:00.151) 0:01:12.164 ****** 2026-01-01 00:48:00.464743 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.464784 | orchestrator | 2026-01-01 00:48:00.464799 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-01 00:48:00.464810 | orchestrator | Thursday 01 January 2026 00:47:58 +0000 (0:00:00.145) 0:01:12.310 ****** 2026-01-01 00:48:00.464821 | orchestrator | 
skipping: [testbed-node-5] 2026-01-01 00:48:00.464832 | orchestrator | 2026-01-01 00:48:00.464843 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-01 00:48:00.464854 | orchestrator | Thursday 01 January 2026 00:47:58 +0000 (0:00:00.148) 0:01:12.459 ****** 2026-01-01 00:48:00.464865 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.464876 | orchestrator | 2026-01-01 00:48:00.464887 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-01 00:48:00.464897 | orchestrator | Thursday 01 January 2026 00:47:58 +0000 (0:00:00.146) 0:01:12.606 ****** 2026-01-01 00:48:00.464908 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.464919 | orchestrator | 2026-01-01 00:48:00.464930 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-01 00:48:00.464941 | orchestrator | Thursday 01 January 2026 00:47:58 +0000 (0:00:00.143) 0:01:12.749 ****** 2026-01-01 00:48:00.464952 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.464963 | orchestrator | 2026-01-01 00:48:00.464974 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-01 00:48:00.464985 | orchestrator | Thursday 01 January 2026 00:47:58 +0000 (0:00:00.145) 0:01:12.895 ****** 2026-01-01 00:48:00.464996 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.465006 | orchestrator | 2026-01-01 00:48:00.465017 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-01 00:48:00.465028 | orchestrator | Thursday 01 January 2026 00:47:59 +0000 (0:00:00.386) 0:01:13.282 ****** 2026-01-01 00:48:00.465039 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.465050 | orchestrator | 2026-01-01 00:48:00.465066 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-01 
00:48:00.465078 | orchestrator | Thursday 01 January 2026 00:47:59 +0000 (0:00:00.156) 0:01:13.438 ****** 2026-01-01 00:48:00.465089 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.465099 | orchestrator | 2026-01-01 00:48:00.465111 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-01 00:48:00.465129 | orchestrator | Thursday 01 January 2026 00:47:59 +0000 (0:00:00.161) 0:01:13.599 ****** 2026-01-01 00:48:00.465140 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.465151 | orchestrator | 2026-01-01 00:48:00.465163 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-01 00:48:00.465174 | orchestrator | Thursday 01 January 2026 00:47:59 +0000 (0:00:00.150) 0:01:13.750 ****** 2026-01-01 00:48:00.465185 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.465196 | orchestrator | 2026-01-01 00:48:00.465207 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-01 00:48:00.465218 | orchestrator | Thursday 01 January 2026 00:47:59 +0000 (0:00:00.148) 0:01:13.899 ****** 2026-01-01 00:48:00.465229 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.465239 | orchestrator | 2026-01-01 00:48:00.465250 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-01 00:48:00.465261 | orchestrator | Thursday 01 January 2026 00:47:59 +0000 (0:00:00.149) 0:01:14.048 ****** 2026-01-01 00:48:00.465272 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.465283 | orchestrator | 2026-01-01 00:48:00.465294 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-01 00:48:00.465305 | orchestrator | Thursday 01 January 2026 00:47:59 +0000 (0:00:00.132) 0:01:14.181 ****** 2026-01-01 00:48:00.465317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:48:00.465328 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:48:00.465339 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.465350 | orchestrator | 2026-01-01 00:48:00.465361 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-01 00:48:00.465372 | orchestrator | Thursday 01 January 2026 00:48:00 +0000 (0:00:00.170) 0:01:14.352 ****** 2026-01-01 00:48:00.465383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:48:00.465394 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:48:00.465405 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:00.465416 | orchestrator | 2026-01-01 00:48:00.465427 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-01 00:48:00.465439 | orchestrator | Thursday 01 January 2026 00:48:00 +0000 (0:00:00.164) 0:01:14.516 ****** 2026-01-01 00:48:00.465457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})  2026-01-01 00:48:03.638581 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})  2026-01-01 00:48:03.641660 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:03.641689 | orchestrator | 2026-01-01 00:48:03.641702 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] *****************************
2026-01-01 00:48:03.641714 | orchestrator | Thursday 01 January 2026 00:48:00 +0000 (0:00:00.168) 0:01:14.684 ******
2026-01-01 00:48:03.641725 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})
2026-01-01 00:48:03.641736 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})
2026-01-01 00:48:03.641784 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:48:03.641797 | orchestrator |
2026-01-01 00:48:03.641807 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-01 00:48:03.641859 | orchestrator | Thursday 01 January 2026 00:48:00 +0000 (0:00:00.169) 0:01:14.853 ******
2026-01-01 00:48:03.641871 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})
2026-01-01 00:48:03.641881 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})
2026-01-01 00:48:03.641891 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:48:03.641901 | orchestrator |
2026-01-01 00:48:03.641911 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-01 00:48:03.641921 | orchestrator | Thursday 01 January 2026 00:48:00 +0000 (0:00:00.165) 0:01:15.019 ******
2026-01-01 00:48:03.641931 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})
2026-01-01 00:48:03.641941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})
2026-01-01 00:48:03.641968 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:48:03.641979 | orchestrator |
2026-01-01 00:48:03.641988 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-01 00:48:03.641998 | orchestrator | Thursday 01 January 2026 00:48:01 +0000 (0:00:00.434) 0:01:15.454 ******
2026-01-01 00:48:03.642008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})
2026-01-01 00:48:03.642053 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})
2026-01-01 00:48:03.642066 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:48:03.642076 | orchestrator |
2026-01-01 00:48:03.642085 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-01 00:48:03.642095 | orchestrator | Thursday 01 January 2026 00:48:01 +0000 (0:00:00.162) 0:01:15.616 ******
2026-01-01 00:48:03.642105 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})
2026-01-01 00:48:03.642115 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})
2026-01-01 00:48:03.642125 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:48:03.642134 | orchestrator |
2026-01-01 00:48:03.642144 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-01 00:48:03.642154 | orchestrator | Thursday 01 January 2026 00:48:01 +0000 (0:00:00.159) 0:01:15.776 ******
2026-01-01 00:48:03.642164 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:48:03.642175 | orchestrator |
2026-01-01 00:48:03.642185 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-01 00:48:03.642194 | orchestrator | Thursday 01 January 2026 00:48:02 +0000 (0:00:00.524) 0:01:16.300 ******
2026-01-01 00:48:03.642204 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:48:03.642214 | orchestrator |
2026-01-01 00:48:03.642224 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-01 00:48:03.642234 | orchestrator | Thursday 01 January 2026 00:48:02 +0000 (0:00:00.547) 0:01:16.848 ******
2026-01-01 00:48:03.642243 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:48:03.642253 | orchestrator |
2026-01-01 00:48:03.642263 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-01 00:48:03.642273 | orchestrator | Thursday 01 January 2026 00:48:02 +0000 (0:00:00.165) 0:01:17.013 ******
2026-01-01 00:48:03.642283 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'vg_name': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})
2026-01-01 00:48:03.642294 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'vg_name': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})
2026-01-01 00:48:03.642311 | orchestrator |
2026-01-01 00:48:03.642321 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-01 00:48:03.642331 | orchestrator | Thursday 01 January 2026 00:48:02 +0000 (0:00:00.179) 0:01:17.193 ******
2026-01-01 00:48:03.642368 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})
2026-01-01 00:48:03.642379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})
2026-01-01 00:48:03.642389 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:48:03.642399 | orchestrator |
2026-01-01 00:48:03.642409 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-01 00:48:03.642419 | orchestrator | Thursday 01 January 2026 00:48:03 +0000 (0:00:00.160) 0:01:17.353 ******
2026-01-01 00:48:03.642429 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})
2026-01-01 00:48:03.642439 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})
2026-01-01 00:48:03.642449 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:48:03.642458 | orchestrator |
2026-01-01 00:48:03.642468 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-01 00:48:03.642478 | orchestrator | Thursday 01 January 2026 00:48:03 +0000 (0:00:00.170) 0:01:17.524 ******
2026-01-01 00:48:03.642487 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'})
2026-01-01 00:48:03.642497 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'})
2026-01-01 00:48:03.642507 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:48:03.642517 | orchestrator |
2026-01-01 00:48:03.642526 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-01 00:48:03.642536 | orchestrator | Thursday 01 January 2026 00:48:03 +0000 (0:00:00.165) 0:01:17.689 ******
2026-01-01 00:48:03.642546 | orchestrator | ok: [testbed-node-5] => {
2026-01-01 00:48:03.642556 | orchestrator |     "lvm_report": {
2026-01-01 00:48:03.642565 | orchestrator |         "lv": [
2026-01-01 00:48:03.642575 | orchestrator |             {
2026-01-01 00:48:03.642590 | orchestrator |                 "lv_name": "osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147",
2026-01-01 00:48:03.642601 | orchestrator |                 "vg_name": "ceph-3a96b51d-dfb4-5025-9237-a7295cf21147"
2026-01-01 00:48:03.642611 | orchestrator |             },
2026-01-01 00:48:03.642621 | orchestrator |             {
2026-01-01 00:48:03.642631 | orchestrator |                 "lv_name": "osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9",
2026-01-01 00:48:03.642640 | orchestrator |                 "vg_name": "ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9"
2026-01-01 00:48:03.642650 | orchestrator |             }
2026-01-01 00:48:03.642660 | orchestrator |         ],
2026-01-01 00:48:03.642669 | orchestrator |         "pv": [
2026-01-01 00:48:03.642679 | orchestrator |             {
2026-01-01 00:48:03.642689 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-01 00:48:03.642699 | orchestrator |                 "vg_name": "ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9"
2026-01-01 00:48:03.642708 | orchestrator |             },
2026-01-01 00:48:03.642718 | orchestrator |             {
2026-01-01 00:48:03.642728 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-01 00:48:03.642737 | orchestrator |                 "vg_name": "ceph-3a96b51d-dfb4-5025-9237-a7295cf21147"
2026-01-01 00:48:03.642865 | orchestrator |             }
2026-01-01 00:48:03.642882 | orchestrator |         ]
2026-01-01 00:48:03.642911 | orchestrator |     }
2026-01-01 00:48:03.642929 | orchestrator | }
2026-01-01 00:48:03.642944 | orchestrator |
2026-01-01 00:48:03.642960 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:48:03.642977 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-01 00:48:03.643002 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-01 00:48:03.643018 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-01-01 00:48:03.643035 | orchestrator |
2026-01-01 00:48:03.643052 | orchestrator |
2026-01-01 00:48:03.643070 | orchestrator |
2026-01-01 00:48:03.643088 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:48:03.643105 | orchestrator | Thursday 01 January 2026 00:48:03 +0000 (0:00:00.146) 0:01:17.836 ******
2026-01-01 00:48:03.643121 | orchestrator | ===============================================================================
2026-01-01 00:48:03.643132 | orchestrator | Create block VGs -------------------------------------------------------- 5.82s
2026-01-01 00:48:03.643142 | orchestrator | Create block LVs -------------------------------------------------------- 4.11s
2026-01-01 00:48:03.643151 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.91s
2026-01-01 00:48:03.643161 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.91s
2026-01-01 00:48:03.643171 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.68s
2026-01-01 00:48:03.643181 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.68s
2026-01-01 00:48:03.643190 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s
2026-01-01 00:48:03.643200 | orchestrator | Add known partitions to the list of available block devices ------------- 1.47s
2026-01-01 00:48:03.643222 | orchestrator | Add known links to the list of available block devices ------------------ 1.38s
2026-01-01 00:48:04.145538 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s
2026-01-01 00:48:04.145642 | orchestrator | Print LVM report data --------------------------------------------------- 1.04s
2026-01-01 00:48:04.145656 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s
2026-01-01 00:48:04.145667 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s
2026-01-01 00:48:04.145679 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.87s
2026-01-01 00:48:04.145690 | orchestrator | Check whether ceph_db_wal_devices is used exclusively ------------------- 0.86s
2026-01-01 00:48:04.145700 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.83s
2026-01-01 00:48:04.145711 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.83s
2026-01-01 00:48:04.145722 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.80s
2026-01-01 00:48:04.145733 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.79s
2026-01-01 00:48:04.145744 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s
2026-01-01 00:48:17.047993 | orchestrator | 2026-01-01 00:48:17 | INFO  | Task c691c88a-1bce-4667-9819-a3b0df796533 (facts) was prepared for execution.
2026-01-01 00:48:17.048116 | orchestrator | 2026-01-01 00:48:17 | INFO  | It takes a moment until task c691c88a-1bce-4667-9819-a3b0df796533 (facts) has been started and output is visible here.
2026-01-01 00:48:29.793922 | orchestrator |
2026-01-01 00:48:29.794107 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-01 00:48:29.794128 | orchestrator |
2026-01-01 00:48:29.794141 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-01 00:48:29.794152 | orchestrator | Thursday 01 January 2026 00:48:21 +0000 (0:00:00.290) 0:00:00.290 ******
2026-01-01 00:48:29.794193 | orchestrator | ok: [testbed-manager]
2026-01-01 00:48:29.794206 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:48:29.794217 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:48:29.794228 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:48:29.794239 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:48:29.794249 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:48:29.794260 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:48:29.794271 | orchestrator |
2026-01-01 00:48:29.794283 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-01 00:48:29.794295 | orchestrator | Thursday 01 January 2026 00:48:22 +0000 (0:00:01.068) 0:00:01.359 ******
2026-01-01 00:48:29.794306 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:48:29.794317 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:48:29.794328 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:48:29.794339 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:48:29.794350 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:48:29.794361 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:48:29.794372 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:48:29.794385 | orchestrator |
2026-01-01 00:48:29.794398 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-01 00:48:29.794411 | orchestrator |
2026-01-01 00:48:29.794423 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-01 00:48:29.794437 | orchestrator | Thursday 01 January 2026 00:48:24 +0000 (0:00:01.147) 0:00:02.507 ******
2026-01-01 00:48:29.794449 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:48:29.794462 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:48:29.794474 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:48:29.794487 | orchestrator | ok: [testbed-manager]
2026-01-01 00:48:29.794499 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:48:29.794511 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:48:29.794523 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:48:29.794536 | orchestrator |
2026-01-01 00:48:29.794549 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-01 00:48:29.794561 | orchestrator |
2026-01-01 00:48:29.794574 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-01 00:48:29.794587 | orchestrator | Thursday 01 January 2026 00:48:28 +0000 (0:00:04.957) 0:00:07.465 ******
2026-01-01 00:48:29.794600 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:48:29.794612 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:48:29.794625 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:48:29.794637 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:48:29.794649 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:48:29.794661 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:48:29.794675 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:48:29.794687 | orchestrator |
2026-01-01 00:48:29.794698 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:48:29.794710 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:48:29.794723 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:48:29.794734 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:48:29.794745 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:48:29.794850 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:48:29.794862 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:48:29.794882 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:48:29.794893 | orchestrator |
2026-01-01 00:48:29.794905 | orchestrator |
2026-01-01 00:48:29.794915 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:48:29.794926 | orchestrator | Thursday 01 January 2026 00:48:29 +0000 (0:00:00.476) 0:00:07.941 ******
2026-01-01 00:48:29.794938 | orchestrator | ===============================================================================
2026-01-01 00:48:29.794948 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.96s
2026-01-01 00:48:29.794959 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.15s
2026-01-01 00:48:29.794970 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.07s
2026-01-01 00:48:29.794982 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s
2026-01-01 00:48:42.199108 | orchestrator | 2026-01-01 00:48:42 | INFO  | Task 2f502771-fcae-4164-8e8e-b853db18aed4 (frr) was prepared for execution.
2026-01-01 00:48:42.199236 | orchestrator | 2026-01-01 00:48:42 | INFO  | It takes a moment until task 2f502771-fcae-4164-8e8e-b853db18aed4 (frr) has been started and output is visible here.
2026-01-01 00:49:08.709893 | orchestrator |
2026-01-01 00:49:08.710077 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-01-01 00:49:08.710100 | orchestrator |
2026-01-01 00:49:08.710114 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-01-01 00:49:08.710193 | orchestrator | Thursday 01 January 2026 00:48:46 +0000 (0:00:00.227) 0:00:00.227 ******
2026-01-01 00:49:08.710208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-01-01 00:49:08.710222 | orchestrator |
2026-01-01 00:49:08.710233 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-01-01 00:49:08.710245 | orchestrator | Thursday 01 January 2026 00:48:47 +0000 (0:00:00.214) 0:00:00.441 ******
2026-01-01 00:49:08.710256 | orchestrator | changed: [testbed-manager]
2026-01-01 00:49:08.710282 | orchestrator |
2026-01-01 00:49:08.710293 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-01-01 00:49:08.710323 | orchestrator | Thursday 01 January 2026 00:48:48 +0000 (0:00:01.137) 0:00:01.579 ******
2026-01-01 00:49:08.710334 | orchestrator | changed: [testbed-manager]
2026-01-01 00:49:08.710345 | orchestrator |
2026-01-01 00:49:08.710357 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-01-01 00:49:08.710368 | orchestrator | Thursday 01 January 2026 00:48:58 +0000 (0:00:10.616) 0:00:12.196 ******
2026-01-01 00:49:08.710379 | orchestrator | ok: [testbed-manager]
2026-01-01 00:49:08.710394 | orchestrator |
2026-01-01 00:49:08.710407 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-01-01 00:49:08.710421 | orchestrator | Thursday 01 January 2026 00:48:59 +0000 (0:00:01.094) 0:00:13.290 ******
2026-01-01 00:49:08.710434 | orchestrator | changed: [testbed-manager]
2026-01-01 00:49:08.710446 | orchestrator |
2026-01-01 00:49:08.710460 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-01-01 00:49:08.710473 | orchestrator | Thursday 01 January 2026 00:49:00 +0000 (0:00:01.077) 0:00:14.368 ******
2026-01-01 00:49:08.710486 | orchestrator | ok: [testbed-manager]
2026-01-01 00:49:08.710499 | orchestrator |
2026-01-01 00:49:08.710512 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-01-01 00:49:08.710525 | orchestrator | Thursday 01 January 2026 00:49:02 +0000 (0:00:01.286) 0:00:15.654 ******
2026-01-01 00:49:08.710539 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:49:08.710551 | orchestrator |
2026-01-01 00:49:08.710564 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-01-01 00:49:08.710578 | orchestrator | Thursday 01 January 2026 00:49:02 +0000 (0:00:00.146) 0:00:15.801 ******
2026-01-01 00:49:08.710615 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:49:08.710630 | orchestrator |
2026-01-01 00:49:08.710643 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-01-01 00:49:08.710657 | orchestrator | Thursday 01 January 2026 00:49:02 +0000 (0:00:00.154) 0:00:15.956 ******
2026-01-01 00:49:08.710670 | orchestrator | changed: [testbed-manager]
2026-01-01 00:49:08.710683 | orchestrator |
2026-01-01 00:49:08.710696 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-01-01 00:49:08.710709 | orchestrator | Thursday 01 January 2026 00:49:03 +0000 (0:00:00.945) 0:00:16.901 ******
2026-01-01 00:49:08.710722 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-01-01 00:49:08.710735 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-01-01 00:49:08.710781 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-01-01 00:49:08.710793 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-01-01 00:49:08.710804 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-01-01 00:49:08.710815 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-01-01 00:49:08.710826 | orchestrator |
2026-01-01 00:49:08.710837 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-01-01 00:49:08.710848 | orchestrator | Thursday 01 January 2026 00:49:05 +0000 (0:00:01.968) 0:00:18.870 ******
2026-01-01 00:49:08.710859 | orchestrator | ok: [testbed-manager]
2026-01-01 00:49:08.710870 | orchestrator |
2026-01-01 00:49:08.710881 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-01-01 00:49:08.710892 | orchestrator | Thursday 01 January 2026 00:49:06 +0000 (0:00:01.427) 0:00:20.298 ******
2026-01-01 00:49:08.710903 | orchestrator | changed: [testbed-manager]
2026-01-01 00:49:08.710914 | orchestrator |
2026-01-01 00:49:08.710925 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:49:08.710936 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:49:08.710947 | orchestrator |
2026-01-01 00:49:08.710958 | orchestrator |
2026-01-01 00:49:08.710969 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:49:08.710980 | orchestrator | Thursday 01 January 2026 00:49:08 +0000 (0:00:01.427) 0:00:21.726 ******
2026-01-01 00:49:08.710991 | orchestrator | ===============================================================================
2026-01-01 00:49:08.711002 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.62s
2026-01-01 00:49:08.711013 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.97s
2026-01-01 00:49:08.711024 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.43s
2026-01-01 00:49:08.711034 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.43s
2026-01-01 00:49:08.711046 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.29s
2026-01-01 00:49:08.711074 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.14s
2026-01-01 00:49:08.711086 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.09s
2026-01-01 00:49:08.711097 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.08s
2026-01-01 00:49:08.711108 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.95s
2026-01-01 00:49:08.711119 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s
2026-01-01 00:49:08.711130 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s
2026-01-01 00:49:08.711141 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s
2026-01-01 00:49:09.092560 | orchestrator |
2026-01-01 00:49:09.094879 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Jan 1 00:49:09 UTC 2026
2026-01-01 00:49:09.094912 | orchestrator |
2026-01-01 00:49:11.156149 | orchestrator | 2026-01-01 00:49:11 | INFO  | Collection nutshell is prepared for execution
2026-01-01 00:49:11.156275 | orchestrator | 2026-01-01 00:49:11 | INFO  | A [0] - dotfiles
2026-01-01 00:49:21.271637 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [0] - homer
2026-01-01 00:49:21.271823 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [0] - netdata
2026-01-01 00:49:21.271843 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [0] - openstackclient
2026-01-01 00:49:21.271856 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [0] - phpmyadmin
2026-01-01 00:49:21.271944 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [0] - common
2026-01-01 00:49:21.273789 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [1] -- loadbalancer
2026-01-01 00:49:21.273820 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [2] --- opensearch
2026-01-01 00:49:21.273828 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [2] --- mariadb-ng
2026-01-01 00:49:21.273835 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [3] ---- horizon
2026-01-01 00:49:21.273842 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [3] ---- keystone
2026-01-01 00:49:21.273849 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [4] ----- neutron
2026-01-01 00:49:21.274265 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [5] ------ wait-for-nova
2026-01-01 00:49:21.274279 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [6] ------- octavia
2026-01-01 00:49:21.275289 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [4] ----- barbican
2026-01-01 00:49:21.275305 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [4] ----- designate
2026-01-01 00:49:21.275312 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [4] ----- ironic
2026-01-01 00:49:21.275612 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [4] ----- placement
2026-01-01 00:49:21.275624 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [4] ----- magnum
2026-01-01 00:49:21.276362 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [1] -- openvswitch
2026-01-01 00:49:21.276375 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [2] --- ovn
2026-01-01 00:49:21.276384 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [1] -- memcached
2026-01-01 00:49:21.276393 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [1] -- redis
2026-01-01 00:49:21.276802 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [1] -- rabbitmq-ng
2026-01-01 00:49:21.276819 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [0] - kubernetes
2026-01-01 00:49:21.278969 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [1] -- kubeconfig
2026-01-01 00:49:21.279004 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [1] -- copy-kubeconfig
2026-01-01 00:49:21.279245 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [0] - ceph
2026-01-01 00:49:21.281001 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [1] -- ceph-pools
2026-01-01 00:49:21.281041 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [2] --- copy-ceph-keys
2026-01-01 00:49:21.281161 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [3] ---- cephclient
2026-01-01 00:49:21.281286 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-01-01 00:49:21.281306 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [4] ----- wait-for-keystone
2026-01-01 00:49:21.281318 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [5] ------ kolla-ceph-rgw
2026-01-01 00:49:21.281337 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [5] ------ glance
2026-01-01 00:49:21.281495 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [5] ------ cinder
2026-01-01 00:49:21.281514 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [5] ------ nova
2026-01-01 00:49:21.281531 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [4] ----- prometheus
2026-01-01 00:49:21.281543 | orchestrator | 2026-01-01 00:49:21 | INFO  | A [5] ------ grafana
2026-01-01 00:49:21.482645 | orchestrator | 2026-01-01 00:49:21 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-01-01 00:49:21.482783 | orchestrator | 2026-01-01 00:49:21 | INFO  | Tasks are running in the background
2026-01-01 00:49:24.760595 | orchestrator | 2026-01-01 00:49:24 | INFO  | No task IDs specified, wait for all currently running tasks
2026-01-01 00:49:26.884104 | orchestrator | 2026-01-01 00:49:26 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:49:26.885500 | orchestrator | 2026-01-01 00:49:26 | INFO  | Task 98089031-4d60-4c59-b801-95a60f726c10 is in state STARTED
2026-01-01 00:49:26.886135 | orchestrator | 2026-01-01 00:49:26 | INFO  | Task 6b0d10e7-ec03-464a-8165-759ef19ea54c is in state STARTED
2026-01-01 00:49:26.887495 | orchestrator | 2026-01-01 00:49:26 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:49:26.891608 | orchestrator | 2026-01-01 00:49:26 | INFO  | Task 5840b1f7-efb9-4d8b-9586-67be9d6d5658 is in state STARTED
2026-01-01 00:49:26.891903 | orchestrator | 2026-01-01 00:49:26 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED
2026-01-01 00:49:26.892402 | orchestrator | 2026-01-01 00:49:26 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:49:26.892424 | orchestrator | 2026-01-01 00:49:26 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:49:29.937411 | orchestrator | 2026-01-01 00:49:29 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:49:29.940709 | orchestrator | 2026-01-01 00:49:29 | INFO  | Task 98089031-4d60-4c59-b801-95a60f726c10 is in state STARTED
2026-01-01 00:49:29.943719 | orchestrator | 2026-01-01 00:49:29 | INFO  | Task 6b0d10e7-ec03-464a-8165-759ef19ea54c is in state STARTED
2026-01-01 00:49:29.948067 | orchestrator | 2026-01-01 00:49:29 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:49:29.949430 | orchestrator | 2026-01-01 00:49:29 | INFO  | Task 5840b1f7-efb9-4d8b-9586-67be9d6d5658 is in state STARTED
2026-01-01 00:49:29.951431 | orchestrator | 2026-01-01 00:49:29 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED
2026-01-01 00:49:29.952181 | orchestrator | 2026-01-01 00:49:29 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:49:29.952239 | orchestrator | 2026-01-01 00:49:29 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:49:33.009245 | orchestrator | 2026-01-01 00:49:33 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:49:33.013199 | orchestrator | 2026-01-01 00:49:33 | INFO  | Task 98089031-4d60-4c59-b801-95a60f726c10 is in state STARTED
2026-01-01 00:49:33.015814 | orchestrator | 2026-01-01 00:49:33 | INFO  | Task 6b0d10e7-ec03-464a-8165-759ef19ea54c is in state STARTED
2026-01-01 00:49:33.016147 | orchestrator | 2026-01-01 00:49:33 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:49:33.016781 | orchestrator | 2026-01-01 00:49:33 | INFO  | Task 5840b1f7-efb9-4d8b-9586-67be9d6d5658 is in state STARTED
2026-01-01 00:49:33.019637 | orchestrator | 2026-01-01 00:49:33 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED
2026-01-01 00:49:33.021570 | orchestrator | 2026-01-01 00:49:33 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:49:33.021621 | orchestrator | 2026-01-01 00:49:33 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:49:36.124645 | orchestrator | 2026-01-01 00:49:36 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:49:36.125155 | orchestrator | 2026-01-01 00:49:36 | INFO  | Task 98089031-4d60-4c59-b801-95a60f726c10 is in state STARTED
2026-01-01 00:49:36.130104 | orchestrator | 2026-01-01 00:49:36 | INFO  | Task 6b0d10e7-ec03-464a-8165-759ef19ea54c is in state STARTED
2026-01-01 00:49:36.130943 | orchestrator | 2026-01-01 00:49:36 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:49:36.131931 | orchestrator | 2026-01-01 00:49:36 | INFO  | Task 5840b1f7-efb9-4d8b-9586-67be9d6d5658 is in state STARTED
2026-01-01 00:49:36.133659 | orchestrator | 2026-01-01 00:49:36 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED
2026-01-01 00:49:36.135011 | orchestrator | 2026-01-01 00:49:36 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:49:36.135062 | orchestrator | 2026-01-01 00:49:36 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:49:39.232679 | orchestrator | 2026-01-01 00:49:39 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:49:39.232841 | orchestrator | 2026-01-01 00:49:39 | INFO  | Task 98089031-4d60-4c59-b801-95a60f726c10 is in state STARTED
2026-01-01 00:49:39.233573 | orchestrator | 2026-01-01 00:49:39 | INFO  | Task 6b0d10e7-ec03-464a-8165-759ef19ea54c is in state STARTED
2026-01-01 00:49:39.234627 | orchestrator | 2026-01-01 00:49:39 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:49:39.234925 | orchestrator | 2026-01-01 00:49:39 | INFO  | Task 5840b1f7-efb9-4d8b-9586-67be9d6d5658 is in state STARTED
2026-01-01 00:49:39.235719 | orchestrator | 2026-01-01 00:49:39 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED
2026-01-01 00:49:39.236535 | orchestrator | 2026-01-01 00:49:39 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:49:39.236559 | orchestrator | 2026-01-01 00:49:39 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:49:42.346428 | orchestrator | 2026-01-01 00:49:42 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:49:42.350468 | orchestrator | 2026-01-01 00:49:42 | INFO  | Task 98089031-4d60-4c59-b801-95a60f726c10 is in state STARTED
2026-01-01 00:49:42.350535 | orchestrator | 2026-01-01 00:49:42 | INFO  | Task 6b0d10e7-ec03-464a-8165-759ef19ea54c is in state STARTED
2026-01-01 00:49:42.350549 | orchestrator | 2026-01-01 00:49:42 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:49:42.350561 | orchestrator | 2026-01-01 00:49:42 | INFO  | Task 5840b1f7-efb9-4d8b-9586-67be9d6d5658 is in state STARTED
2026-01-01 00:49:42.352443 | orchestrator | 2026-01-01 00:49:42 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED
2026-01-01 00:49:42.353246 | orchestrator | 2026-01-01 00:49:42 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:49:42.353270 | orchestrator | 2026-01-01 00:49:42 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:49:45.440845 | orchestrator | 2026-01-01 00:49:45 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:49:45.443013 | orchestrator | 2026-01-01 00:49:45 | INFO  | Task 98089031-4d60-4c59-b801-95a60f726c10 is in state STARTED
2026-01-01 00:49:45.444991 | orchestrator | 2026-01-01 00:49:45 | INFO  | Task 6b0d10e7-ec03-464a-8165-759ef19ea54c is in state STARTED
2026-01-01 00:49:45.448459 | orchestrator | 2026-01-01 00:49:45 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:49:45.453080 | orchestrator | 2026-01-01 00:49:45 | INFO  | Task 5840b1f7-efb9-4d8b-9586-67be9d6d5658 is in state STARTED
2026-01-01 00:49:45.458805 | orchestrator | 2026-01-01 00:49:45 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED
2026-01-01 00:49:45.464793 | orchestrator | 2026-01-01 00:49:45 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:49:45.464838 | orchestrator | 2026-01-01 00:49:45 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:49:48.703638 | orchestrator | 2026-01-01 00:49:48 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:49:48.703712 | orchestrator | 2026-01-01 00:49:48 | INFO  | Task 98089031-4d60-4c59-b801-95a60f726c10 is in state STARTED
2026-01-01 00:49:48.704588 | orchestrator | 2026-01-01 00:49:48 | INFO  | Task 6b0d10e7-ec03-464a-8165-759ef19ea54c is in state STARTED
2026-01-01 00:49:48.705274 | orchestrator | 2026-01-01 00:49:48 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:49:48.706270 | orchestrator | 2026-01-01 00:49:48 | INFO  | Task 5840b1f7-efb9-4d8b-9586-67be9d6d5658 is in state STARTED
2026-01-01 00:49:48.706699 | orchestrator | 2026-01-01 00:49:48 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED
2026-01-01 00:49:48.707984 | orchestrator | 2026-01-01 00:49:48 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:49:48.708012 | orchestrator | 2026-01-01 00:49:48 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:49:51.793417 | orchestrator | 2026-01-01 00:49:51 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:49:51.793574 | orchestrator | 2026-01-01 00:49:51 | INFO  | Task 98089031-4d60-4c59-b801-95a60f726c10 is in state STARTED
2026-01-01 00:49:51.795114 | orchestrator | 2026-01-01 00:49:51 | INFO  | Task 6b0d10e7-ec03-464a-8165-759ef19ea54c is in state STARTED
2026-01-01 00:49:51.797078 | orchestrator | 2026-01-01 00:49:51 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:49:51.800776 | orchestrator | 2026-01-01 00:49:51 | INFO  | Task 5840b1f7-efb9-4d8b-9586-67be9d6d5658 is in state STARTED
2026-01-01 00:49:51.802960 | orchestrator | 2026-01-01 00:49:51 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED
2026-01-01 00:49:51.804987 | orchestrator | 2026-01-01 00:49:51 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:49:51.805044 | orchestrator | 2026-01-01 00:49:51 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:49:54.931141 | orchestrator | 2026-01-01 00:49:54 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:49:54.934682 | orchestrator | 2026-01-01 00:49:54 | INFO  | Task 98089031-4d60-4c59-b801-95a60f726c10 is in state STARTED
2026-01-01 00:49:54.934732 | orchestrator | 2026-01-01 00:49:54 | INFO  | Task
6b0d10e7-ec03-464a-8165-759ef19ea54c is in state STARTED
2026-01-01 00:49:54 | orchestrator | INFO  | Tasks 669fa53e-01ac-47ea-b165-fe2a5b2f4c78, 5840b1f7-efb9-4d8b-9586-67be9d6d5658, 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 and 00391569-f586-4afd-b44b-a2d41ca8e389 are in state STARTED; Wait 1 second(s) until the next check
2026-01-01 00:49:58 | orchestrator |
2026-01-01 00:49:58 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-01-01 00:49:58 | orchestrator |
2026-01-01 00:49:58 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-01-01 00:49:58 | orchestrator | Thursday 01 January 2026 00:49:39 +0000 (0:00:01.664)  0:00:01.664 ******
2026-01-01 00:49:58 | orchestrator | changed: [testbed-manager]
2026-01-01 00:49:58 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:49:58 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:49:58 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:49:58 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:49:58 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:49:58 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:49:58 | orchestrator |
2026-01-01 00:49:58 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-01-01 00:49:58 | orchestrator | Thursday 01 January 2026 00:49:44 +0000 (0:00:05.718)  0:00:07.382 ******
2026-01-01 00:49:58 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)   (likewise for testbed-node-0 through testbed-node-5)
2026-01-01 00:49:58 | orchestrator |
2026-01-01 00:49:58 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-01-01 00:49:58 | orchestrator | Thursday 01 January 2026 00:49:46 +0000 (0:00:02.164)  0:00:09.547 ******
2026-01-01 00:49:58 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)   (likewise for testbed-node-0 through testbed-node-5; on every host the pre-check cmd ['ls', '-F', '~/.tmux.conf'] exited rc=2 with stderr "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", failed_when_result=False, so there was nothing to remove)
2026-01-01 00:49:58 | orchestrator |
2026-01-01 00:49:58 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-01-01 00:49:58 | orchestrator | Thursday 01 January 2026 00:49:49 +0000 (0:00:02.624)  0:00:12.172 ******
2026-01-01 00:49:58 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)   (likewise for testbed-node-0 through testbed-node-5)
2026-01-01 00:49:58 | orchestrator |
2026-01-01 00:49:58 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-01-01 00:49:58 | orchestrator | Thursday 01 January 2026 00:49:52 +0000 (0:00:03.024)  0:00:15.196 ******
2026-01-01 00:49:58 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)   (likewise for testbed-node-0 through testbed-node-5)
2026-01-01 00:49:58 | orchestrator |
2026-01-01 00:49:58 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:49:58 | orchestrator | testbed-manager   : ok=5  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-01-01 00:49:58 | orchestrator | testbed-node-0..5 : ok=5  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0   (identical on all six nodes)
2026-01-01 00:49:58 | orchestrator |
2026-01-01 00:49:58 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:49:58 | orchestrator | Thursday 01 January 2026 00:49:56 +0000 (0:00:03.655)  0:00:18.852 ******
2026-01-01 00:49:58 | orchestrator | ===============================================================================
2026-01-01 00:49:58 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.72s
2026-01-01 00:49:58 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.66s
2026-01-01 00:49:58 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.02s
2026-01-01 00:49:58 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.62s
2026-01-01 00:49:58 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.16s
2026-01-01 00:49:58 | orchestrator | INFO  | Task 5840b1f7-efb9-4d8b-9586-67be9d6d5658 is in state SUCCESS; tasks 9f628ba7-cb05-468e-8e9f-fe5b42831951, 98089031-4d60-4c59-b801-95a60f726c10, 6b0d10e7-ec03-464a-8165-759ef19ea54c, 669fa53e-01ac-47ea-b165-fe2a5b2f4c78, 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 and 00391569-f586-4afd-b44b-a2d41ca8e389 remain in state STARTED; Wait 1 second(s) until the next check
2026-01-01 00:50:01 | orchestrator | INFO  | Task c454c6d8-ec61-47a1-ad36-67752f37bfc6 is in state STARTED (first appearance); tasks 9f628ba7-cb05-468e-8e9f-fe5b42831951, 98089031-4d60-4c59-b801-95a60f726c10, 6b0d10e7-ec03-464a-8165-759ef19ea54c, 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 and 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 are in state STARTED
2026-01-01 00:50:01.557499 | orchestrator | 2026-01-01 00:50:01 | INFO  | Task 
00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:50:01 | orchestrator | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:50:04 – 00:50:26 | orchestrator | INFO  | Tasks c454c6d8-ec61-47a1-ad36-67752f37bfc6, 9f628ba7-cb05-468e-8e9f-fe5b42831951, 98089031-4d60-4c59-b801-95a60f726c10, 6b0d10e7-ec03-464a-8165-759ef19ea54c, 669fa53e-01ac-47ea-b165-fe2a5b2f4c78, 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 and 00391569-f586-4afd-b44b-a2d41ca8e389 all remain in state STARTED across seven identical check cycles (one every ~3 s, each followed by "Wait 1 second(s) until the next check")
2026-01-01 00:50:26.781610 | orchestrator | 2026-01-01 00:50:26 | INFO  | Task 
98089031-4d60-4c59-b801-95a60f726c10 is in state SUCCESS
2026-01-01 00:50:26 | orchestrator | INFO  | Tasks 6b0d10e7-ec03-464a-8165-759ef19ea54c, 669fa53e-01ac-47ea-b165-fe2a5b2f4c78, 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 and 00391569-f586-4afd-b44b-a2d41ca8e389 are in state STARTED; Wait 1 second(s) until the next check
2026-01-01 00:50:29 | orchestrator | INFO  | Tasks c454c6d8-ec61-47a1-ad36-67752f37bfc6, 9f628ba7-cb05-468e-8e9f-fe5b42831951, 6b0d10e7-ec03-464a-8165-759ef19ea54c, 669fa53e-01ac-47ea-b165-fe2a5b2f4c78, 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 and 00391569-f586-4afd-b44b-a2d41ca8e389 are in state STARTED; Wait 1 second(s) until the next check
2026-01-01 00:50:33 | orchestrator | INFO  | Task 6b0d10e7-ec03-464a-8165-759ef19ea54c is in state SUCCESS
2026-01-01 00:50:33 – 00:51:00 | orchestrator | INFO  | Tasks c454c6d8-ec61-47a1-ad36-67752f37bfc6, 9f628ba7-cb05-468e-8e9f-fe5b42831951, 669fa53e-01ac-47ea-b165-fe2a5b2f4c78, 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 and 00391569-f586-4afd-b44b-a2d41ca8e389 all remain in state STARTED across ten identical check cycles (one every ~3 s, each followed by "Wait 1 second(s) until the next check")
2026-01-01 00:51:04.187565 | orchestrator | 2026-01-01 00:51:04 | INFO  | Task c454c6d8-ec61-47a1-ad36-67752f37bfc6 is in state STARTED
2026-01-01 00:51:04.187650 | orchestrator | 2026-01-01 00:51:04 | INFO  | Task 
9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:51:04.187660 | orchestrator | 2026-01-01 00:51:04 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED 2026-01-01 00:51:04.187668 | orchestrator | 2026-01-01 00:51:04 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED 2026-01-01 00:51:04.187675 | orchestrator | 2026-01-01 00:51:04 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:51:04.187683 | orchestrator | 2026-01-01 00:51:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:51:07.165146 | orchestrator | 2026-01-01 00:51:07 | INFO  | Task c454c6d8-ec61-47a1-ad36-67752f37bfc6 is in state STARTED 2026-01-01 00:51:07.165395 | orchestrator | 2026-01-01 00:51:07 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:51:07.167873 | orchestrator | 2026-01-01 00:51:07 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED 2026-01-01 00:51:07.168535 | orchestrator | 2026-01-01 00:51:07 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED 2026-01-01 00:51:07.170336 | orchestrator | 2026-01-01 00:51:07 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:51:07.170360 | orchestrator | 2026-01-01 00:51:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:51:10.262349 | orchestrator | 2026-01-01 00:51:10 | INFO  | Task c454c6d8-ec61-47a1-ad36-67752f37bfc6 is in state STARTED 2026-01-01 00:51:10.262521 | orchestrator | 2026-01-01 00:51:10 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:51:10.263965 | orchestrator | 2026-01-01 00:51:10 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED 2026-01-01 00:51:10.265020 | orchestrator | 2026-01-01 00:51:10 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED 2026-01-01 00:51:10.266110 | orchestrator | 2026-01-01 00:51:10 | INFO  | Task 
00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:51:10.266245 | orchestrator | 2026-01-01 00:51:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:51:13.322215 | orchestrator | 2026-01-01 00:51:13.322349 | orchestrator | 2026-01-01 00:51:13.322373 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-01-01 00:51:13.322394 | orchestrator | 2026-01-01 00:51:13.322413 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-01-01 00:51:13.322454 | orchestrator | Thursday 01 January 2026 00:49:39 +0000 (0:00:01.311) 0:00:01.311 ****** 2026-01-01 00:51:13.322474 | orchestrator | ok: [testbed-manager] => { 2026-01-01 00:51:13.322526 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-01-01 00:51:13.322549 | orchestrator | } 2026-01-01 00:51:13.322566 | orchestrator | 2026-01-01 00:51:13.322584 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-01-01 00:51:13.322602 | orchestrator | Thursday 01 January 2026 00:49:39 +0000 (0:00:00.574) 0:00:01.886 ****** 2026-01-01 00:51:13.322618 | orchestrator | ok: [testbed-manager] 2026-01-01 00:51:13.322675 | orchestrator | 2026-01-01 00:51:13.322696 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-01-01 00:51:13.322714 | orchestrator | Thursday 01 January 2026 00:49:41 +0000 (0:00:01.605) 0:00:03.491 ****** 2026-01-01 00:51:13.322733 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-01-01 00:51:13.322789 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-01-01 00:51:13.322808 | orchestrator | 2026-01-01 00:51:13.322827 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-01-01 00:51:13.322844 | 
orchestrator | Thursday 01 January 2026 00:49:43 +0000 (0:00:02.171) 0:00:05.662 ****** 2026-01-01 00:51:13.322862 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:13.322880 | orchestrator | 2026-01-01 00:51:13.322897 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-01-01 00:51:13.322916 | orchestrator | Thursday 01 January 2026 00:49:46 +0000 (0:00:03.113) 0:00:08.775 ****** 2026-01-01 00:51:13.322936 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:13.322954 | orchestrator | 2026-01-01 00:51:13.322970 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-01-01 00:51:13.322982 | orchestrator | Thursday 01 January 2026 00:49:48 +0000 (0:00:01.757) 0:00:10.532 ****** 2026-01-01 00:51:13.322993 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-01-01 00:51:13.323003 | orchestrator | ok: [testbed-manager] 2026-01-01 00:51:13.323014 | orchestrator | 2026-01-01 00:51:13.323025 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-01-01 00:51:13.323036 | orchestrator | Thursday 01 January 2026 00:50:19 +0000 (0:00:31.176) 0:00:41.709 ****** 2026-01-01 00:51:13.323046 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:13.323057 | orchestrator | 2026-01-01 00:51:13.323068 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:51:13.323080 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:51:13.323092 | orchestrator | 2026-01-01 00:51:13.323103 | orchestrator | 2026-01-01 00:51:13.323114 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:51:13.323125 | orchestrator | Thursday 01 January 2026 00:50:23 +0000 (0:00:04.181) 0:00:45.890 ****** 2026-01-01 
00:51:13.323135 | orchestrator | =============================================================================== 2026-01-01 00:51:13.323146 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 31.18s 2026-01-01 00:51:13.323157 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.18s 2026-01-01 00:51:13.323167 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.11s 2026-01-01 00:51:13.323178 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.17s 2026-01-01 00:51:13.323189 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.76s 2026-01-01 00:51:13.323200 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.61s 2026-01-01 00:51:13.323211 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.57s 2026-01-01 00:51:13.323221 | orchestrator | 2026-01-01 00:51:13.323232 | orchestrator | 2026-01-01 00:51:13.323243 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-01-01 00:51:13.323254 | orchestrator | 2026-01-01 00:51:13.323265 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-01-01 00:51:13.323289 | orchestrator | Thursday 01 January 2026 00:49:38 +0000 (0:00:00.754) 0:00:00.754 ****** 2026-01-01 00:51:13.323301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-01-01 00:51:13.323313 | orchestrator | 2026-01-01 00:51:13.323324 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-01-01 00:51:13.323335 | orchestrator | Thursday 01 January 2026 00:49:39 +0000 (0:00:00.966) 0:00:01.721 ****** 2026-01-01 
00:51:13.323346 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-01-01 00:51:13.323356 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-01-01 00:51:13.323367 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-01-01 00:51:13.323378 | orchestrator | 2026-01-01 00:51:13.323469 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-01-01 00:51:13.323491 | orchestrator | Thursday 01 January 2026 00:49:41 +0000 (0:00:02.190) 0:00:03.911 ****** 2026-01-01 00:51:13.323502 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:13.323512 | orchestrator | 2026-01-01 00:51:13.323521 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-01-01 00:51:13.323531 | orchestrator | Thursday 01 January 2026 00:49:43 +0000 (0:00:02.026) 0:00:05.938 ****** 2026-01-01 00:51:13.323565 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 
2026-01-01 00:51:13.323576 | orchestrator | ok: [testbed-manager] 2026-01-01 00:51:13.323585 | orchestrator | 2026-01-01 00:51:13.323595 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-01-01 00:51:13.323613 | orchestrator | Thursday 01 January 2026 00:50:19 +0000 (0:00:35.911) 0:00:41.850 ****** 2026-01-01 00:51:13.323623 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:13.323633 | orchestrator | 2026-01-01 00:51:13.323642 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-01-01 00:51:13.323652 | orchestrator | Thursday 01 January 2026 00:50:21 +0000 (0:00:02.663) 0:00:44.513 ****** 2026-01-01 00:51:13.323664 | orchestrator | ok: [testbed-manager] 2026-01-01 00:51:13.323681 | orchestrator | 2026-01-01 00:51:13.323697 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-01-01 00:51:13.323713 | orchestrator | Thursday 01 January 2026 00:50:24 +0000 (0:00:02.660) 0:00:47.174 ****** 2026-01-01 00:51:13.323729 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:13.323774 | orchestrator | 2026-01-01 00:51:13.323791 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-01-01 00:51:13.323807 | orchestrator | Thursday 01 January 2026 00:50:29 +0000 (0:00:04.699) 0:00:51.873 ****** 2026-01-01 00:51:13.323823 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:13.323841 | orchestrator | 2026-01-01 00:51:13.323856 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-01-01 00:51:13.323871 | orchestrator | Thursday 01 January 2026 00:50:30 +0000 (0:00:01.243) 0:00:53.117 ****** 2026-01-01 00:51:13.323886 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:13.323901 | orchestrator | 2026-01-01 00:51:13.323917 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : 
Copy bash completion script] *** 2026-01-01 00:51:13.323934 | orchestrator | Thursday 01 January 2026 00:50:32 +0000 (0:00:01.448) 0:00:54.565 ****** 2026-01-01 00:51:13.323950 | orchestrator | ok: [testbed-manager] 2026-01-01 00:51:13.323965 | orchestrator | 2026-01-01 00:51:13.323980 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:51:13.323996 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:51:13.324014 | orchestrator | 2026-01-01 00:51:13.324031 | orchestrator | 2026-01-01 00:51:13.324047 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:51:13.324065 | orchestrator | Thursday 01 January 2026 00:50:32 +0000 (0:00:00.392) 0:00:54.957 ****** 2026-01-01 00:51:13.324094 | orchestrator | =============================================================================== 2026-01-01 00:51:13.324105 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.91s 2026-01-01 00:51:13.324115 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.70s 2026-01-01 00:51:13.324125 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.66s 2026-01-01 00:51:13.324134 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.66s 2026-01-01 00:51:13.324144 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.19s 2026-01-01 00:51:13.324154 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.03s 2026-01-01 00:51:13.324164 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.45s 2026-01-01 00:51:13.324173 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.24s 2026-01-01 00:51:13.324183 | 
orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.97s 2026-01-01 00:51:13.324193 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.39s 2026-01-01 00:51:13.324202 | orchestrator | 2026-01-01 00:51:13.324212 | orchestrator | 2026-01-01 00:51:13.324222 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-01 00:51:13.324231 | orchestrator | 2026-01-01 00:51:13.324241 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-01-01 00:51:13.324251 | orchestrator | Thursday 01 January 2026 00:50:03 +0000 (0:00:00.277) 0:00:00.277 ****** 2026-01-01 00:51:13.324260 | orchestrator | ok: [testbed-manager] 2026-01-01 00:51:13.324270 | orchestrator | 2026-01-01 00:51:13.324280 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-01-01 00:51:13.324290 | orchestrator | Thursday 01 January 2026 00:50:05 +0000 (0:00:01.797) 0:00:02.074 ****** 2026-01-01 00:51:13.324299 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-01-01 00:51:13.324309 | orchestrator | 2026-01-01 00:51:13.324319 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-01-01 00:51:13.324328 | orchestrator | Thursday 01 January 2026 00:50:06 +0000 (0:00:00.736) 0:00:02.810 ****** 2026-01-01 00:51:13.324338 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:13.324348 | orchestrator | 2026-01-01 00:51:13.324357 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-01-01 00:51:13.324367 | orchestrator | Thursday 01 January 2026 00:50:07 +0000 (0:00:01.613) 0:00:04.424 ****** 2026-01-01 00:51:13.324377 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2026-01-01 00:51:13.324386 | orchestrator | ok: [testbed-manager] 2026-01-01 00:51:13.324396 | orchestrator | 2026-01-01 00:51:13.324406 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-01-01 00:51:13.324415 | orchestrator | Thursday 01 January 2026 00:51:03 +0000 (0:00:56.241) 0:01:00.665 ****** 2026-01-01 00:51:13.324425 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:13.324435 | orchestrator | 2026-01-01 00:51:13.324444 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:51:13.324454 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:51:13.324464 | orchestrator | 2026-01-01 00:51:13.324474 | orchestrator | 2026-01-01 00:51:13.324483 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:51:13.324503 | orchestrator | Thursday 01 January 2026 00:51:12 +0000 (0:00:08.741) 0:01:09.407 ****** 2026-01-01 00:51:13.324513 | orchestrator | =============================================================================== 2026-01-01 00:51:13.324530 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.24s 2026-01-01 00:51:13.324540 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.74s 2026-01-01 00:51:13.324556 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.80s 2026-01-01 00:51:13.324566 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.61s 2026-01-01 00:51:13.324576 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.74s 2026-01-01 00:51:13.324585 | orchestrator | 2026-01-01 00:51:13 | INFO  | Task c454c6d8-ec61-47a1-ad36-67752f37bfc6 is in state SUCCESS 2026-01-01 00:51:13.324704 | orchestrator | 2026-01-01 00:51:13 | 
INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:51:13.327364 | orchestrator | 2026-01-01 00:51:13 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED 2026-01-01 00:51:13.329912 | orchestrator | 2026-01-01 00:51:13 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED 2026-01-01 00:51:13.332131 | orchestrator | 2026-01-01 00:51:13 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:51:13.332188 | orchestrator | 2026-01-01 00:51:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:51:16.411249 | orchestrator | 2026-01-01 00:51:16 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:51:16.421521 | orchestrator | 2026-01-01 00:51:16 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED 2026-01-01 00:51:16.430399 | orchestrator | 2026-01-01 00:51:16 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state STARTED 2026-01-01 00:51:16.434376 | orchestrator | 2026-01-01 00:51:16 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:51:16.434449 | orchestrator | 2026-01-01 00:51:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:51:19.490954 | orchestrator | 2026-01-01 00:51:19 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:51:19.493254 | orchestrator | 2026-01-01 00:51:19 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED 2026-01-01 00:51:19.495251 | orchestrator | 2026-01-01 00:51:19 | INFO  | Task 4cd3b3b4-0ff0-4e9f-9e66-a1994eb47628 is in state SUCCESS 2026-01-01 00:51:19.496276 | orchestrator | 2026-01-01 00:51:19.496329 | orchestrator | 2026-01-01 00:51:19.496349 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:51:19.496367 | orchestrator | 2026-01-01 00:51:19.496384 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2026-01-01 00:51:19.496402 | orchestrator | Thursday 01 January 2026 00:49:34 +0000 (0:00:00.958) 0:00:00.958 ****** 2026-01-01 00:51:19.496420 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-01-01 00:51:19.496438 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-01-01 00:51:19.496455 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-01-01 00:51:19.496472 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-01-01 00:51:19.496489 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-01-01 00:51:19.496506 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-01-01 00:51:19.496524 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-01-01 00:51:19.496542 | orchestrator | 2026-01-01 00:51:19.496560 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-01-01 00:51:19.496578 | orchestrator | 2026-01-01 00:51:19.496597 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-01-01 00:51:19.496617 | orchestrator | Thursday 01 January 2026 00:49:38 +0000 (0:00:03.946) 0:00:04.905 ****** 2026-01-01 00:51:19.496661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:51:19.496706 | orchestrator | 2026-01-01 00:51:19.496719 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-01-01 00:51:19.496730 | orchestrator | Thursday 01 January 2026 00:49:41 +0000 (0:00:03.121) 0:00:08.026 ****** 2026-01-01 00:51:19.496767 | orchestrator | ok: [testbed-manager] 2026-01-01 00:51:19.496780 | orchestrator | ok: 
[testbed-node-0] 2026-01-01 00:51:19.496790 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:51:19.496801 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:51:19.496812 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:51:19.496823 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:51:19.496834 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:51:19.496845 | orchestrator | 2026-01-01 00:51:19.496856 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-01-01 00:51:19.496867 | orchestrator | Thursday 01 January 2026 00:49:43 +0000 (0:00:02.401) 0:00:10.428 ****** 2026-01-01 00:51:19.496878 | orchestrator | ok: [testbed-manager] 2026-01-01 00:51:19.496889 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:51:19.496899 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:51:19.496910 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:51:19.496921 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:51:19.496932 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:51:19.496943 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:51:19.496953 | orchestrator | 2026-01-01 00:51:19.496964 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-01-01 00:51:19.496985 | orchestrator | Thursday 01 January 2026 00:49:47 +0000 (0:00:03.911) 0:00:14.340 ****** 2026-01-01 00:51:19.496996 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:19.497007 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:51:19.497018 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:51:19.497045 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:51:19.497056 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:51:19.497067 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:51:19.497083 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:51:19.497101 | orchestrator | 2026-01-01 00:51:19.497121 | orchestrator | TASK [osism.services.netdata : Add repository] 
********************************* 2026-01-01 00:51:19.497138 | orchestrator | Thursday 01 January 2026 00:49:51 +0000 (0:00:03.703) 0:00:18.043 ****** 2026-01-01 00:51:19.497155 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:51:19.497173 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:51:19.497191 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:51:19.497209 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:51:19.497227 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:51:19.497245 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:51:19.497262 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:19.497281 | orchestrator | 2026-01-01 00:51:19.497300 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-01-01 00:51:19.497318 | orchestrator | Thursday 01 January 2026 00:50:07 +0000 (0:00:15.929) 0:00:33.973 ****** 2026-01-01 00:51:19.497336 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:51:19.497356 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:51:19.497374 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:51:19.497394 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:51:19.497405 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:51:19.497416 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:51:19.497427 | orchestrator | changed: [testbed-manager] 2026-01-01 00:51:19.497438 | orchestrator | 2026-01-01 00:51:19.497449 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-01-01 00:51:19.497460 | orchestrator | Thursday 01 January 2026 00:50:52 +0000 (0:00:44.797) 0:01:18.770 ****** 2026-01-01 00:51:19.497472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:51:19.497486 
| orchestrator |
2026-01-01 00:51:19.497496 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-01 00:51:19.497521 | orchestrator | Thursday 01 January 2026 00:50:54 +0000 (0:00:01.765) 0:01:20.535 ******
2026-01-01 00:51:19.497532 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-01 00:51:19.497544 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-01 00:51:19.497555 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-01 00:51:19.497566 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-01 00:51:19.497595 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-01 00:51:19.497606 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-01 00:51:19.497617 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-01 00:51:19.497628 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-01 00:51:19.497639 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-01 00:51:19.497650 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-01 00:51:19.497661 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-01 00:51:19.497672 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-01 00:51:19.497682 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-01 00:51:19.497693 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-01 00:51:19.497704 | orchestrator |
2026-01-01 00:51:19.497715 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-01 00:51:19.497727 | orchestrator | Thursday 01 January 2026 00:51:02 +0000 (0:00:08.415) 0:01:28.951 ******
2026-01-01 00:51:19.497763 | orchestrator | ok: [testbed-manager]
2026-01-01 00:51:19.497774 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:51:19.497785 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:51:19.497796 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:51:19.497807 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:51:19.497817 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:51:19.497828 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:51:19.497839 | orchestrator |
2026-01-01 00:51:19.497850 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-01 00:51:19.497862 | orchestrator | Thursday 01 January 2026 00:51:04 +0000 (0:00:01.606) 0:01:30.557 ******
2026-01-01 00:51:19.497873 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:51:19.497884 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:51:19.497895 | orchestrator | changed: [testbed-manager]
2026-01-01 00:51:19.497906 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:51:19.497916 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:51:19.497927 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:51:19.497938 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:51:19.497949 | orchestrator |
2026-01-01 00:51:19.497960 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-01 00:51:19.497971 | orchestrator | Thursday 01 January 2026 00:51:06 +0000 (0:00:02.236) 0:01:32.794 ******
2026-01-01 00:51:19.497982 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:51:19.497993 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:51:19.498004 | orchestrator | ok: [testbed-manager]
2026-01-01 00:51:19.498015 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:51:19.498078 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:51:19.498089 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:51:19.498100 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:51:19.498111 | orchestrator |
2026-01-01 00:51:19.498122 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-01 00:51:19.498133 | orchestrator | Thursday 01 January 2026 00:51:08 +0000 (0:00:01.800) 0:01:34.594 ******
2026-01-01 00:51:19.498144 | orchestrator | ok: [testbed-manager]
2026-01-01 00:51:19.498155 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:51:19.498166 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:51:19.498177 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:51:19.498187 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:51:19.498198 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:51:19.498223 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:51:19.498234 | orchestrator |
2026-01-01 00:51:19.498245 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-01 00:51:19.498257 | orchestrator | Thursday 01 January 2026 00:51:10 +0000 (0:00:02.265) 0:01:36.859 ******
2026-01-01 00:51:19.498268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-01 00:51:19.498282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:51:19.498293 | orchestrator |
2026-01-01 00:51:19.498305 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-01 00:51:19.498315 | orchestrator | Thursday 01 January 2026 00:51:12 +0000 (0:00:01.654) 0:01:38.514 ******
2026-01-01 00:51:19.498326 | orchestrator | changed: [testbed-manager]
2026-01-01 00:51:19.498337 | orchestrator |
2026-01-01 00:51:19.498348 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-01 00:51:19.498359 | orchestrator | Thursday 01 January 2026 00:51:14 +0000 (0:00:02.381) 0:01:40.895 ******
2026-01-01 00:51:19.498370 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:51:19.498381 | orchestrator | changed: [testbed-manager]
2026-01-01 00:51:19.498398 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:51:19.498416 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:51:19.498435 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:51:19.498455 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:51:19.498474 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:51:19.498492 | orchestrator |
2026-01-01 00:51:19.498510 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:51:19.498530 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:51:19.498552 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:51:19.498571 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:51:19.498786 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:51:19.498830 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:51:19.498850 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:51:19.498868 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:51:19.498884 | orchestrator |
2026-01-01 00:51:19.498895 | orchestrator |
2026-01-01 00:51:19.498906 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:51:19.498918 | orchestrator | Thursday 01 January 2026 00:51:17 +0000 (0:00:03.399) 0:01:44.295 ******
2026-01-01 00:51:19.498928 | orchestrator | ===============================================================================
2026-01-01 00:51:19.498939 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 44.80s
2026-01-01 00:51:19.498950 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.93s
2026-01-01 00:51:19.498961 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 8.42s
2026-01-01 00:51:19.498972 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.95s
2026-01-01 00:51:19.498983 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.91s
2026-01-01 00:51:19.499006 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.70s
2026-01-01 00:51:19.499020 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.40s
2026-01-01 00:51:19.499038 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.12s
2026-01-01 00:51:19.499056 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.40s
2026-01-01 00:51:19.499074 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.38s
2026-01-01 00:51:19.499093 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.27s
2026-01-01 00:51:19.499111 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.24s
2026-01-01 00:51:19.499127 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.80s
2026-01-01 00:51:19.499138 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.77s
2026-01-01 00:51:19.499149 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.65s
2026-01-01 00:51:19.499160 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status ---
1.61s
2026-01-01 00:51:19.499172 | orchestrator | 2026-01-01 00:51:19 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:19.499183 | orchestrator | 2026-01-01 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:22.545357 | orchestrator | 2026-01-01 00:51:22 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:22.549479 | orchestrator | 2026-01-01 00:51:22 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:22.549559 | orchestrator | 2026-01-01 00:51:22 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:22.549584 | orchestrator | 2026-01-01 00:51:22 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:25.598516 | orchestrator | 2026-01-01 00:51:25 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:25.598652 | orchestrator | 2026-01-01 00:51:25 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:25.599328 | orchestrator | 2026-01-01 00:51:25 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:25.599353 | orchestrator | 2026-01-01 00:51:25 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:28.649340 | orchestrator | 2026-01-01 00:51:28 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:28.653381 | orchestrator | 2026-01-01 00:51:28 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:28.656256 | orchestrator | 2026-01-01 00:51:28 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:28.656317 | orchestrator | 2026-01-01 00:51:28 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:31.697255 | orchestrator | 2026-01-01 00:51:31 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:31.699249 | orchestrator | 2026-01-01 00:51:31 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:31.702269 | orchestrator | 2026-01-01 00:51:31 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:31.704490 | orchestrator | 2026-01-01 00:51:31 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:34.753065 | orchestrator | 2026-01-01 00:51:34 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:34.753978 | orchestrator | 2026-01-01 00:51:34 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:34.756132 | orchestrator | 2026-01-01 00:51:34 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:34.756206 | orchestrator | 2026-01-01 00:51:34 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:37.821929 | orchestrator | 2026-01-01 00:51:37 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:37.825014 | orchestrator | 2026-01-01 00:51:37 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:37.826908 | orchestrator | 2026-01-01 00:51:37 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:37.826951 | orchestrator | 2026-01-01 00:51:37 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:40.876914 | orchestrator | 2026-01-01 00:51:40 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:40.878792 | orchestrator | 2026-01-01 00:51:40 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:40.878897 | orchestrator | 2026-01-01 00:51:40 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:40.878925 | orchestrator | 2026-01-01 00:51:40 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:43.937472 | orchestrator | 2026-01-01 00:51:43 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:43.937678 | orchestrator | 2026-01-01 00:51:43 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:43.941087 | orchestrator | 2026-01-01 00:51:43 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:43.941713 | orchestrator | 2026-01-01 00:51:43 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:47.020084 | orchestrator | 2026-01-01 00:51:47 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:47.022633 | orchestrator | 2026-01-01 00:51:47 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:47.028322 | orchestrator | 2026-01-01 00:51:47 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:47.028403 | orchestrator | 2026-01-01 00:51:47 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:50.108799 | orchestrator | 2026-01-01 00:51:50 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:50.111437 | orchestrator | 2026-01-01 00:51:50 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:50.113641 | orchestrator | 2026-01-01 00:51:50 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:50.113708 | orchestrator | 2026-01-01 00:51:50 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:53.181148 | orchestrator | 2026-01-01 00:51:53 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:53.182790 | orchestrator | 2026-01-01 00:51:53 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:53.184855 | orchestrator | 2026-01-01 00:51:53 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:53.184911 | orchestrator | 2026-01-01 00:51:53 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:56.233557 | orchestrator | 2026-01-01 00:51:56 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:56.235591 | orchestrator | 2026-01-01 00:51:56 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:56.237985 | orchestrator | 2026-01-01 00:51:56 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:56.238160 | orchestrator | 2026-01-01 00:51:56 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:59.277072 | orchestrator | 2026-01-01 00:51:59 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:51:59.278251 | orchestrator | 2026-01-01 00:51:59 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:51:59.280186 | orchestrator | 2026-01-01 00:51:59 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:51:59.280231 | orchestrator | 2026-01-01 00:51:59 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:02.334685 | orchestrator | 2026-01-01 00:52:02 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:02.335763 | orchestrator | 2026-01-01 00:52:02 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:52:02.337134 | orchestrator | 2026-01-01 00:52:02 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:02.337161 | orchestrator | 2026-01-01 00:52:02 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:05.380394 | orchestrator | 2026-01-01 00:52:05 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:05.380966 | orchestrator | 2026-01-01 00:52:05 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:52:05.381374 | orchestrator | 2026-01-01 00:52:05 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:05.381399 | orchestrator | 2026-01-01 00:52:05 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:08.429170 | orchestrator | 2026-01-01 00:52:08 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:08.429279 | orchestrator | 2026-01-01 00:52:08 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:52:08.429294 | orchestrator | 2026-01-01 00:52:08 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:08.429306 | orchestrator | 2026-01-01 00:52:08 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:11.517516 | orchestrator | 2026-01-01 00:52:11 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:11.517622 | orchestrator | 2026-01-01 00:52:11 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:52:11.517637 | orchestrator | 2026-01-01 00:52:11 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:11.517650 | orchestrator | 2026-01-01 00:52:11 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:14.552594 | orchestrator | 2026-01-01 00:52:14 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:14.552800 | orchestrator | 2026-01-01 00:52:14 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:52:14.553925 | orchestrator | 2026-01-01 00:52:14 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:14.553959 | orchestrator | 2026-01-01 00:52:14 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:17.594379 | orchestrator | 2026-01-01 00:52:17 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:17.595522 | orchestrator | 2026-01-01 00:52:17 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state STARTED
2026-01-01 00:52:17.597576 | orchestrator | 2026-01-01 00:52:17 |
INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:17.597809 | orchestrator | 2026-01-01 00:52:17 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:20.648044 | orchestrator | 2026-01-01 00:52:20 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:20.648420 | orchestrator | 2026-01-01 00:52:20 | INFO  | Task 669fa53e-01ac-47ea-b165-fe2a5b2f4c78 is in state SUCCESS
2026-01-01 00:52:20.655004 | orchestrator |
2026-01-01 00:52:20.655066 | orchestrator |
2026-01-01 00:52:20.655079 | orchestrator | PLAY [Apply role common] *******************************************************
2026-01-01 00:52:20.655090 | orchestrator |
2026-01-01 00:52:20.655100 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-01 00:52:20.655111 | orchestrator | Thursday 01 January 2026 00:49:26 +0000 (0:00:00.282) 0:00:00.282 ******
2026-01-01 00:52:20.655122 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:52:20.655133 | orchestrator |
2026-01-01 00:52:20.655144 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-01 00:52:20.655154 | orchestrator | Thursday 01 January 2026 00:49:28 +0000 (0:00:01.235) 0:00:01.517 ******
2026-01-01 00:52:20.655165 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:52:20.655175 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:52:20.655185 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:52:20.655195 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:52:20.655205 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:52:20.655215 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:52:20.655225 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:52:20.655235 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:52:20.655244 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:52:20.655254 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:52:20.655264 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:52:20.655274 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:52:20.655284 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:52:20.655295 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:52:20.655305 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:52:20.655315 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:52:20.655325 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:52:20.655335 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:52:20.655345 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:52:20.655355 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:52:20.655365 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:52:20.655375 | orchestrator |
2026-01-01 00:52:20.655385 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-01 00:52:20.655395 | orchestrator | Thursday 01 January 2026 00:49:32 +0000 (0:00:04.523) 0:00:06.041 ******
2026-01-01 00:52:20.655405 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:52:20.655435 | orchestrator |
2026-01-01 00:52:20.655445 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-01-01 00:52:20.655455 | orchestrator | Thursday 01 January 2026 00:49:33 +0000 (0:00:01.359) 0:00:07.400 ******
2026-01-01 00:52:20.655476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.655491 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.655521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.655532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.655545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.655556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.655568 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.655586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655628 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655800 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655861 | orchestrator |
2026-01-01 00:52:20.655871 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-01-01 00:52:20.655881 | orchestrator | Thursday 01 January 2026 00:49:39 +0000 (0:00:05.377) 0:00:12.778 ******
2026-01-01 00:52:20.655892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.655910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655920 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.655935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655946 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:52:20.655962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.655973 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.655993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.656012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656033 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:20.656043 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656053 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:52:20.656064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656074 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:20.656095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:52:20.656107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:52:20.656133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656143 | orchestrator | skipping: [testbed-node-3] 2026-01-01 
00:52:20.656153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:52:20.656177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656229 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:52:20.656239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656249 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:52:20.656259 | orchestrator | 2026-01-01 00:52:20.656269 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-01 00:52:20.656279 | orchestrator | Thursday 01 January 2026 00:49:43 +0000 (0:00:04.287) 0:00:17.065 ****** 2026-01-01 00:52:20.656289 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:52:20.656305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:52:20.656316 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:52:20.656346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:52:20.656374 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:52:20.656396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656413 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:20.656421 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:52:20.656430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:52:20.656454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656463 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:52:20.656471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 
00:52:20.656484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656493 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:52:20.656501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656509 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:20.656517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656525 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:20.656534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:52:20.656542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.656563 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:52:20.656571 | orchestrator | 2026-01-01 00:52:20.656579 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-01-01 00:52:20.656587 | orchestrator | Thursday 01 January 2026 00:49:49 +0000 (0:00:06.184) 0:00:23.250 ****** 2026-01-01 00:52:20.656595 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:52:20.656603 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:20.656611 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:20.656619 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:20.656627 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:52:20.656644 | orchestrator | skipping: 
[testbed-node-4] 2026-01-01 00:52:20.656652 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:52:20.656660 | orchestrator | 2026-01-01 00:52:20.656668 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-01 00:52:20.656676 | orchestrator | Thursday 01 January 2026 00:49:51 +0000 (0:00:01.810) 0:00:25.061 ****** 2026-01-01 00:52:20.656684 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:52:20.656692 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:20.656700 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:20.656708 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:20.656716 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:52:20.656724 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:52:20.656748 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:52:20.656756 | orchestrator | 2026-01-01 00:52:20.656764 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-01 00:52:20.656772 | orchestrator | Thursday 01 January 2026 00:49:53 +0000 (0:00:02.342) 0:00:27.404 ****** 2026-01-01 00:52:20.656780 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:52:20.656788 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:20.656796 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:20.656803 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:20.656811 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:52:20.656819 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:52:20.656827 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:52:20.656835 | orchestrator | 2026-01-01 00:52:20.656843 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-01-01 00:52:20.656851 | orchestrator | Thursday 01 January 2026 00:49:56 +0000 (0:00:02.139) 0:00:29.543 ****** 2026-01-01 00:52:20.656859 | orchestrator | changed: 
[testbed-node-0] 2026-01-01 00:52:20.656867 | orchestrator | changed: [testbed-manager] 2026-01-01 00:52:20.656875 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:20.656883 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:20.656891 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:52:20.656899 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:52:20.656906 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:52:20.656914 | orchestrator | 2026-01-01 00:52:20.656922 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-01 00:52:20.656931 | orchestrator | Thursday 01 January 2026 00:50:00 +0000 (0:00:04.028) 0:00:33.572 ****** 2026-01-01 00:52:20.656939 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.656948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.656956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.656973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.656982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.656995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.657004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:52:20.657029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657046 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.657067 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657127 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657139 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657152 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657160 | orchestrator | 2026-01-01 00:52:20.657168 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-01 00:52:20.657177 | orchestrator | Thursday 01 January 2026 00:50:07 +0000 (0:00:07.411) 0:00:40.983 ****** 2026-01-01 00:52:20.657185 | orchestrator | [WARNING]: Skipped 2026-01-01 00:52:20.657194 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-01 00:52:20.657202 | orchestrator | to this access issue: 2026-01-01 00:52:20.657210 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-01 00:52:20.657218 | orchestrator | directory 2026-01-01 00:52:20.657226 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 00:52:20.657234 | orchestrator | 2026-01-01 00:52:20.657242 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-01 00:52:20.657250 | orchestrator | Thursday 01 January 2026 00:50:08 +0000 (0:00:01.301) 0:00:42.284 ****** 2026-01-01 00:52:20.657258 | orchestrator | [WARNING]: Skipped 2026-01-01 00:52:20.657266 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-01 00:52:20.657274 | orchestrator | to this access issue: 2026-01-01 00:52:20.657282 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-01 00:52:20.657290 | orchestrator | directory 2026-01-01 00:52:20.657298 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 00:52:20.657306 | orchestrator | 2026-01-01 00:52:20.657314 | orchestrator | TASK 
[common : Find custom fluentd format config files] ************************ 2026-01-01 00:52:20.657322 | orchestrator | Thursday 01 January 2026 00:50:09 +0000 (0:00:01.133) 0:00:43.418 ****** 2026-01-01 00:52:20.657330 | orchestrator | [WARNING]: Skipped 2026-01-01 00:52:20.657338 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-01 00:52:20.657346 | orchestrator | to this access issue: 2026-01-01 00:52:20.657354 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-01 00:52:20.657362 | orchestrator | directory 2026-01-01 00:52:20.657370 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 00:52:20.657377 | orchestrator | 2026-01-01 00:52:20.657386 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-01 00:52:20.657393 | orchestrator | Thursday 01 January 2026 00:50:10 +0000 (0:00:00.842) 0:00:44.261 ****** 2026-01-01 00:52:20.657419 | orchestrator | [WARNING]: Skipped 2026-01-01 00:52:20.657428 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-01 00:52:20.657436 | orchestrator | to this access issue: 2026-01-01 00:52:20.657444 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-01 00:52:20.657452 | orchestrator | directory 2026-01-01 00:52:20.657460 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 00:52:20.657468 | orchestrator | 2026-01-01 00:52:20.657476 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-01 00:52:20.657483 | orchestrator | Thursday 01 January 2026 00:50:12 +0000 (0:00:01.641) 0:00:45.903 ****** 2026-01-01 00:52:20.657491 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:20.657499 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:20.657507 | orchestrator | changed: [testbed-node-2] 
2026-01-01 00:52:20.657515 | orchestrator | changed: [testbed-manager] 2026-01-01 00:52:20.657523 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:52:20.657531 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:52:20.657539 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:52:20.657547 | orchestrator | 2026-01-01 00:52:20.657555 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-01 00:52:20.657563 | orchestrator | Thursday 01 January 2026 00:50:22 +0000 (0:00:10.319) 0:00:56.222 ****** 2026-01-01 00:52:20.657571 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-01 00:52:20.657579 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-01 00:52:20.657587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-01 00:52:20.657595 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-01 00:52:20.657603 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-01 00:52:20.657611 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-01 00:52:20.657619 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-01 00:52:20.657626 | orchestrator | 2026-01-01 00:52:20.657635 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-01 00:52:20.657643 | orchestrator | Thursday 01 January 2026 00:50:28 +0000 (0:00:05.974) 0:01:02.197 ****** 2026-01-01 00:52:20.657651 | orchestrator | changed: [testbed-manager] 2026-01-01 00:52:20.657659 | orchestrator | changed: [testbed-node-1] 2026-01-01 
00:52:20.657667 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:20.657674 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:20.657682 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:52:20.657690 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:52:20.657698 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:52:20.657706 | orchestrator | 2026-01-01 00:52:20.657714 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-01 00:52:20.657722 | orchestrator | Thursday 01 January 2026 00:50:32 +0000 (0:00:03.860) 0:01:06.057 ****** 2026-01-01 00:52:20.657760 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.657770 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.657783 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.657791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.657800 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657808 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-01-01 00:52:20.657817 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.657842 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.657866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.657875 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.657883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.657892 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657900 | orchestrator | 
ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.657912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.657924 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657938 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.657947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:52:20.657955 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657963 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657971 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.657979 | orchestrator | 2026-01-01 00:52:20.657988 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-01 00:52:20.657996 | orchestrator | Thursday 01 January 2026 00:50:34 +0000 (0:00:02.323) 0:01:08.381 ****** 2026-01-01 00:52:20.658004 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-01 00:52:20.658066 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-01 00:52:20.658076 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-01 00:52:20.658084 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-01 00:52:20.658092 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-01 00:52:20.658100 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-01 00:52:20.658108 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-01 00:52:20.658116 | orchestrator | 2026-01-01 00:52:20.658124 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-01 00:52:20.658136 | orchestrator | Thursday 01 January 2026 00:50:37 +0000 (0:00:03.066) 0:01:11.448 ****** 2026-01-01 00:52:20.658144 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:52:20.658160 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:52:20.658168 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:52:20.658176 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:52:20.658184 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:52:20.658192 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:52:20.658200 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:52:20.658208 | orchestrator | 2026-01-01 00:52:20.658221 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-01-01 00:52:20 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:52:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:52:20.658549 | orchestrator | Thursday 01 January 2026 00:50:40 +0000 (0:00:02.789) 0:01:14.238 ****** 2026-01-01 00:52:20.658569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.658587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.658600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.658612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.658623 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.658652 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.658698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:52:20.658764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:20.658779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658838 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658908 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.658969 | orchestrator |
2026-01-01 00:52:20.658982 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-01-01 00:52:20.658995 | orchestrator | Thursday 01 January 2026 00:50:44 +0000 (0:00:03.402) 0:01:17.640 ******
2026-01-01 00:52:20.659009 | orchestrator | changed: [testbed-manager] => {
2026-01-01 00:52:20.659023 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:52:20.659036 | orchestrator | }
2026-01-01 00:52:20.659049 | orchestrator | changed: [testbed-node-0] => {
2026-01-01 00:52:20.659061 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:52:20.659075 | orchestrator | }
2026-01-01 00:52:20.659093 | orchestrator | changed: [testbed-node-1] => {
2026-01-01 00:52:20.659105 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:52:20.659117 | orchestrator | }
2026-01-01 00:52:20.659130 | orchestrator | changed: [testbed-node-2] => {
2026-01-01 00:52:20.659143 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:52:20.659156 | orchestrator | }
2026-01-01 00:52:20.659168 | orchestrator | changed: [testbed-node-3] => {
2026-01-01 00:52:20.659181 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:52:20.659194 | orchestrator | }
2026-01-01 00:52:20.659207 | orchestrator | changed: [testbed-node-4] => {
2026-01-01 00:52:20.659219 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:52:20.659231 | orchestrator | }
2026-01-01 00:52:20.659245 | orchestrator | changed: [testbed-node-5] => {
2026-01-01 00:52:20.659257 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:52:20.659269 | orchestrator | }
2026-01-01 00:52:20.659280 | orchestrator |
2026-01-01 00:52:20.659291 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-01 00:52:20.659302 | orchestrator | Thursday 01 January 2026 00:50:45 +0000 (0:00:01.066) 0:01:18.706 ******
2026-01-01 00:52:20.659321 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.659334 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659346 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.659376 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:52:20.659388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.659428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659465 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:52:20.659477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.659489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659518 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:52:20.659529 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:52:20.659541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.659553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659581 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:52:20.659600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.659612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659642 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:52:20.659653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:52:20.659665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:52:20.659688 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:52:20.659699 | orchestrator |
2026-01-01 00:52:20.659711 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-01-01 00:52:20.659722 | orchestrator | Thursday 01 January 2026 00:50:46 +0000 (0:00:01.599) 0:01:20.306 ******
2026-01-01 00:52:20.659777 | orchestrator | changed: [testbed-manager]
2026-01-01 00:52:20.659789 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:52:20.659800 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:52:20.659811 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:52:20.659822 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:52:20.659833 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:52:20.659844 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:52:20.659855 | orchestrator |
2026-01-01 00:52:20.659867 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-01-01 00:52:20.659878 | orchestrator | Thursday 01 January 2026 00:50:48 +0000 (0:00:01.776) 0:01:22.083 ******
2026-01-01 00:52:20.659889 | orchestrator | changed: [testbed-manager]
2026-01-01 00:52:20.659900 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:52:20.659912 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:52:20.659922 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:52:20.659933 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:52:20.659944 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:52:20.659955 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:52:20.659966 | orchestrator |
2026-01-01 00:52:20.659984 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:52:20.659996 | orchestrator | Thursday 01 January 2026 00:50:50 +0000 (0:00:00.090) 0:01:23.761 ******
2026-01-01 00:52:20.660007 | orchestrator |
2026-01-01 00:52:20.660019 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:52:20.660030 | orchestrator | Thursday 01 January 2026 00:50:50 +0000 (0:00:00.096) 0:01:23.851 ******
2026-01-01 00:52:20.660041 | orchestrator |
2026-01-01 00:52:20.660053 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:52:20.660064 | orchestrator | Thursday 01 January 2026 00:50:50 +0000 (0:00:00.066) 0:01:24.014 ******
2026-01-01 00:52:20.660082 | orchestrator |
2026-01-01 00:52:20.660101 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:52:20.660113 | orchestrator | Thursday 01 January 2026 00:50:50 +0000 (0:00:00.281) 0:01:24.296 ******
2026-01-01 00:52:20.660124 | orchestrator |
2026-01-01 00:52:20.660136 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:52:20.660180 | orchestrator | Thursday 01 January 2026 00:50:50 +0000 (0:00:00.085) 0:01:24.382 ******
2026-01-01 00:52:20.660191 | orchestrator |
2026-01-01 00:52:20.660202 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:52:20.660213 | orchestrator | Thursday 01 January 2026 00:50:50 +0000 (0:00:00.085) 0:01:24.467 ******
2026-01-01 00:52:20.660225 | orchestrator |
2026-01-01 00:52:20.660236 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-01-01 00:52:20.660247 | orchestrator | Thursday 01 January 2026 00:50:51 +0000 (0:00:00.104) 0:01:24.572 ******
2026-01-01 00:52:20.660258 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:52:20.660269 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:52:20.660280 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:52:20.660291 | orchestrator | changed: [testbed-manager]
2026-01-01 00:52:20.660302 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:52:20.660313 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:52:20.660324 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:52:20.660335 | orchestrator |
2026-01-01 00:52:20.660347 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-01 00:52:20.660358 | orchestrator | Thursday 01 January 2026 00:51:24 +0000 (0:00:33.724) 0:01:58.297 ******
2026-01-01 00:52:20.660369 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:52:20.660380 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:52:20.660392 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:52:20.660402 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:52:20.660413 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:52:20.660425 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:52:20.660435 | orchestrator | changed: [testbed-manager]
2026-01-01 00:52:20.660447 | orchestrator |
2026-01-01 00:52:20.660458 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-01 00:52:20.660469 | orchestrator | Thursday 01 January 2026 00:52:08 +0000 (0:00:43.259) 0:02:41.556 ******
2026-01-01 00:52:20.660480 | orchestrator | ok: [testbed-manager]
2026-01-01 00:52:20.660491 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:52:20.660503 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:52:20.660513 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:52:20.660524 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:52:20.660536 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:52:20.660547 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:52:20.660558 | orchestrator |
2026-01-01 00:52:20.660569 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-01 00:52:20.660580 | orchestrator | Thursday 01 January 2026 00:52:10 +0000 (0:00:02.107) 0:02:43.664 ******
2026-01-01 00:52:20.660591 | orchestrator | changed: [testbed-manager]
2026-01-01 00:52:20.660603 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:52:20.660613 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:52:20.660625 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:52:20.660636 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:52:20.660647 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:52:20.660658 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:52:20.660669 | orchestrator |
2026-01-01 00:52:20.660680 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:52:20.660692 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-01 00:52:20.660712 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-01 00:52:20.660723 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-01 00:52:20.660753 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-01 00:52:20.660765 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-01 00:52:20.660788 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-01 00:52:20.660799 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-01 00:52:20.660810 | orchestrator |
2026-01-01 00:52:20.660822 | orchestrator |
2026-01-01 00:52:20.660833 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:52:20.660844 | orchestrator | Thursday 01 January 2026 00:52:19 +0000 (0:00:09.466) 0:02:53.130 ******
2026-01-01 00:52:20.660855 | orchestrator | ===============================================================================
2026-01-01 00:52:20.660866 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 43.26s
2026-01-01 00:52:20.660877 | orchestrator | common : Restart fluentd container ------------------------------------- 33.72s
2026-01-01 00:52:20.660888 | orchestrator | common : Copying over fluentd.conf ------------------------------------- 10.32s
2026-01-01 00:52:20.660900 | orchestrator | common : Restart cron container ----------------------------------------- 9.47s
2026-01-01 00:52:20.660918 | orchestrator | common : Copying over config.json files for services -------------------- 7.41s
2026-01-01 00:52:20.660930 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 6.18s
2026-01-01 00:52:20.660940 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.97s
2026-01-01 00:52:20.660951 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.38s
2026-01-01 00:52:20.660962 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.52s
2026-01-01 00:52:20.660973 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 4.29s
2026-01-01 00:52:20.660984 | orchestrator | common : Copying over kolla.target -------------------------------------- 4.03s
2026-01-01 00:52:20.660996 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.86s
2026-01-01 00:52:20.661007 | orchestrator | service-check-containers : common | Check containers -------------------- 3.40s
2026-01-01 00:52:20.661018 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.07s
2026-01-01 00:52:20.661029 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.79s
2026-01-01 00:52:20.661041 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.34s
2026-01-01 00:52:20.661052 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.32s
2026-01-01 00:52:20.661063 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.14s
2026-01-01 00:52:20.661074 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.11s
2026-01-01 00:52:20.661085 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 1.81s
2026-01-01 00:52:23.726524 | orchestrator | 2026-01-01 00:52:23 | INFO  | Task a7899ab4-492b-4790-98c8-f86d34bf3f45 is in state STARTED
2026-01-01 00:52:23.726604 | orchestrator | 2026-01-01 00:52:23 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:23.726614 | orchestrator | 2026-01-01 00:52:23 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED
2026-01-01 00:52:23.726643 | orchestrator | 2026-01-01 00:52:23 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:52:23.726649 | orchestrator | 2026-01-01 00:52:23 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED
2026-01-01 00:52:23.726656 | orchestrator | 2026-01-01 00:52:23 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:23.726663 | orchestrator | 2026-01-01 00:52:23 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:26.749108 | orchestrator | 2026-01-01 00:52:26 | INFO  | Task a7899ab4-492b-4790-98c8-f86d34bf3f45 is in state STARTED
2026-01-01 00:52:26.749651 | orchestrator | 2026-01-01 00:52:26 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:26.750235 | orchestrator | 2026-01-01 00:52:26 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED
2026-01-01 00:52:26.750971 | orchestrator | 2026-01-01 00:52:26 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:52:26.751762 | orchestrator | 2026-01-01 00:52:26 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED
2026-01-01 00:52:26.752592 | orchestrator | 2026-01-01 00:52:26 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:26.752615 | orchestrator | 2026-01-01 00:52:26 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:29.790283 | orchestrator | 2026-01-01 00:52:29 | INFO  | Task a7899ab4-492b-4790-98c8-f86d34bf3f45 is in state STARTED
2026-01-01 00:52:29.790673 | orchestrator | 2026-01-01 00:52:29 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:29.791389 | orchestrator | 2026-01-01 00:52:29 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED
2026-01-01 00:52:29.792060 | orchestrator | 2026-01-01 00:52:29 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:52:29.793571 | orchestrator | 2026-01-01 00:52:29 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED
2026-01-01 00:52:29.794290 | orchestrator | 2026-01-01 00:52:29 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:29.794340 | orchestrator | 2026-01-01 00:52:29 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:32.926622 | orchestrator | 2026-01-01 00:52:32 | INFO  | Task a7899ab4-492b-4790-98c8-f86d34bf3f45 is in state STARTED
2026-01-01 00:52:32.926833 | orchestrator | 2026-01-01 00:52:32 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:32.926852 | orchestrator | 2026-01-01 00:52:32 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED
2026-01-01 00:52:32.926888 | orchestrator | 2026-01-01 00:52:32 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:52:32.926899 | orchestrator | 2026-01-01 00:52:32 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED
2026-01-01 00:52:32.926908 | orchestrator | 2026-01-01 00:52:32 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:32.926917 | orchestrator | 2026-01-01 00:52:32 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:35.976539 | orchestrator | 2026-01-01 00:52:35 | INFO  | Task a7899ab4-492b-4790-98c8-f86d34bf3f45 is in state STARTED
2026-01-01 00:52:35.976627 | orchestrator | 2026-01-01 00:52:35 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:35.976847 | orchestrator | 2026-01-01 00:52:35 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED
2026-01-01 00:52:35.978006 | orchestrator | 2026-01-01 00:52:35 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:52:35.978867 | orchestrator | 2026-01-01 00:52:35 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED
2026-01-01 00:52:35.979656 | orchestrator | 2026-01-01 00:52:35 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:35.980680 | orchestrator | 2026-01-01 00:52:35 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:39.027319 | orchestrator | 2026-01-01 00:52:39 | INFO  | Task a7899ab4-492b-4790-98c8-f86d34bf3f45 is in state STARTED
2026-01-01 00:52:39.027707 | orchestrator | 2026-01-01 00:52:39 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:39.029139 | orchestrator | 2026-01-01 00:52:39 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED
2026-01-01 00:52:39.029298 | orchestrator | 2026-01-01 00:52:39 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:52:39.030445 | orchestrator | 2026-01-01 00:52:39 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED
2026-01-01 00:52:39.034303 | orchestrator | 2026-01-01 00:52:39 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:39.034341 | orchestrator | 2026-01-01 00:52:39 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:42.092424 | orchestrator | 2026-01-01 00:52:42 | INFO  | Task a7899ab4-492b-4790-98c8-f86d34bf3f45 is in state STARTED
2026-01-01 00:52:42.093146 | orchestrator | 2026-01-01 00:52:42 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:42.093940 | orchestrator | 2026-01-01 00:52:42 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED
2026-01-01 00:52:42.094917 | orchestrator | 2026-01-01 00:52:42 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:52:42.096266 | orchestrator | 2026-01-01 00:52:42 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED
2026-01-01 00:52:42.096431 | orchestrator | 2026-01-01 00:52:42 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:42.096568 | orchestrator | 2026-01-01 00:52:42 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:45.172425 | orchestrator | 2026-01-01 00:52:45 | INFO  | Task a7899ab4-492b-4790-98c8-f86d34bf3f45 is in state STARTED
2026-01-01 00:52:45.174663 | orchestrator | 2026-01-01 00:52:45 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:45.179856 | orchestrator | 2026-01-01 00:52:45 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED
2026-01-01 00:52:45.180409 | orchestrator | 2026-01-01 00:52:45 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:52:45.182440 | orchestrator | 2026-01-01 00:52:45 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED
2026-01-01 00:52:45.184843 | orchestrator | 2026-01-01 00:52:45 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:45.185098 | orchestrator | 2026-01-01 00:52:45 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:48.249518 | orchestrator | 2026-01-01 00:52:48 | INFO  | Task a7899ab4-492b-4790-98c8-f86d34bf3f45 is in state STARTED
2026-01-01 00:52:48.253450 | orchestrator | 2026-01-01 00:52:48 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED
2026-01-01 00:52:48.254867 | orchestrator | 2026-01-01 00:52:48 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED
2026-01-01 00:52:48.259497 | orchestrator | 2026-01-01 00:52:48 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:52:48.262529 | orchestrator | 2026-01-01 00:52:48 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED
2026-01-01 00:52:48.263347 | orchestrator | 2026-01-01 00:52:48 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:52:48.263371 | orchestrator | 2026-01-01 00:52:48 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:52:51.307656 | orchestrator |
2026-01-01 00:52:51.307831 | orchestrator |
2026-01-01 00:52:51.307844 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 00:52:51.307852 | orchestrator |
2026-01-01 00:52:51.307859 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 00:52:51.307866 | orchestrator | Thursday 01 January 2026 00:52:28 +0000 (0:00:00.489) 0:00:00.489 ******
2026-01-01 00:52:51.307873 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:52:51.307881 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:52:51.307888 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:52:51.307894 | orchestrator |
2026-01-01 00:52:51.307901 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 00:52:51.307908 | orchestrator | Thursday 01 January 2026 00:52:28 +0000 (0:00:00.711) 0:00:01.201 ******
2026-01-01 00:52:51.307916 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-01-01 00:52:51.307924 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-01-01 00:52:51.307930 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-01-01 00:52:51.307937 | orchestrator |
2026-01-01 00:52:51.307943 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-01-01 00:52:51.307949 | orchestrator |
2026-01-01 00:52:51.307955 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-01 00:52:51.307969 | orchestrator | Thursday 01 January 2026 00:52:29 +0000 (0:00:00.844) 0:00:02.046 ******
2026-01-01 00:52:51.307985 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:52:51.307992 | orchestrator |
2026-01-01 00:52:51.307998 | orchestrator | TASK [memcached : Ensuring config directories exist]
*************************** 2026-01-01 00:52:51.308004 | orchestrator | Thursday 01 January 2026 00:52:30 +0000 (0:00:00.790) 0:00:02.837 ****** 2026-01-01 00:52:51.308011 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-01 00:52:51.308018 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-01 00:52:51.308024 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-01 00:52:51.308030 | orchestrator | 2026-01-01 00:52:51.308036 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-01 00:52:51.308043 | orchestrator | Thursday 01 January 2026 00:52:31 +0000 (0:00:01.082) 0:00:03.919 ****** 2026-01-01 00:52:51.308049 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-01 00:52:51.308056 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-01 00:52:51.308062 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-01 00:52:51.308068 | orchestrator | 2026-01-01 00:52:51.308074 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-01-01 00:52:51.308081 | orchestrator | Thursday 01 January 2026 00:52:34 +0000 (0:00:02.673) 0:00:06.592 ****** 2026-01-01 00:52:51.308092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}}}}) 2026-01-01 00:52:51.308144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-01 00:52:51.308169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-01 00:52:51.308177 | orchestrator | 2026-01-01 00:52:51.308183 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-01-01 00:52:51.308190 | orchestrator | Thursday 01 January 2026 00:52:36 +0000 (0:00:02.426) 0:00:09.025 ****** 2026-01-01 00:52:51.308196 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 00:52:51.308203 | 
orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:52:51.308210 | orchestrator | } 2026-01-01 00:52:51.308216 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 00:52:51.308223 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:52:51.308229 | orchestrator | } 2026-01-01 00:52:51.308235 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 00:52:51.308242 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:52:51.308248 | orchestrator | } 2026-01-01 00:52:51.308254 | orchestrator | 2026-01-01 00:52:51.308260 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 00:52:51.308267 | orchestrator | Thursday 01 January 2026 00:52:37 +0000 (0:00:00.947) 0:00:09.973 ****** 2026-01-01 00:52:51.308273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-01 00:52:51.308280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-01 00:52:51.308295 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:51.308301 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:51.308313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-01 00:52:51.308319 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:51.308326 | orchestrator | 2026-01-01 00:52:51.308332 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-01-01 00:52:51.308338 | orchestrator | Thursday 01 January 2026 00:52:40 +0000 (0:00:03.197) 0:00:13.171 ****** 2026-01-01 00:52:51.308345 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:51.308351 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:51.308357 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:51.308363 | orchestrator | 2026-01-01 00:52:51.308369 | orchestrator | PLAY RECAP ********************************************************************* 
2026-01-01 00:52:51.308377 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:52:51.308386 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:52:51.308392 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:52:51.308398 | orchestrator | 2026-01-01 00:52:51.308404 | orchestrator | 2026-01-01 00:52:51.308411 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:52:51.308418 | orchestrator | Thursday 01 January 2026 00:52:47 +0000 (0:00:07.108) 0:00:20.279 ****** 2026-01-01 00:52:51.308428 | orchestrator | =============================================================================== 2026-01-01 00:52:51.308435 | orchestrator | memcached : Restart memcached container --------------------------------- 7.11s 2026-01-01 00:52:51.308441 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.20s 2026-01-01 00:52:51.308447 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.67s 2026-01-01 00:52:51.308454 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.43s 2026-01-01 00:52:51.308460 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.08s 2026-01-01 00:52:51.308466 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.95s 2026-01-01 00:52:51.308472 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2026-01-01 00:52:51.308478 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.79s 2026-01-01 00:52:51.308485 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s 2026-01-01 00:52:51.308492 
| orchestrator | 2026-01-01 00:52:51 | INFO  | Task a7899ab4-492b-4790-98c8-f86d34bf3f45 is in state SUCCESS 2026-01-01 00:52:51.308499 | orchestrator | 2026-01-01 00:52:51 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:52:51.308785 | orchestrator | 2026-01-01 00:52:51 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:52:51.310011 | orchestrator | 2026-01-01 00:52:51 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED 2026-01-01 00:52:51.310157 | orchestrator | 2026-01-01 00:52:51 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:52:51.311541 | orchestrator | 2026-01-01 00:52:51 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED 2026-01-01 00:52:51.311755 | orchestrator | 2026-01-01 00:52:51 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:52:51.311778 | orchestrator | 2026-01-01 00:52:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:52:54.548293 | orchestrator | 2026-01-01 00:52:54 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:52:54.548408 | orchestrator | 2026-01-01 00:52:54 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:52:54.548429 | orchestrator | 2026-01-01 00:52:54 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED 2026-01-01 00:52:54.548441 | orchestrator | 2026-01-01 00:52:54 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:52:54.548451 | orchestrator | 2026-01-01 00:52:54 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED 2026-01-01 00:52:54.548461 | orchestrator | 2026-01-01 00:52:54 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:52:54.548472 | orchestrator | 2026-01-01 00:52:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:52:57.579039 | 
orchestrator | 2026-01-01 00:52:57 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:52:57.580693 | orchestrator | 2026-01-01 00:52:57 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:52:57.581538 | orchestrator | 2026-01-01 00:52:57 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED 2026-01-01 00:52:57.585662 | orchestrator | 2026-01-01 00:52:57 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:52:57.586297 | orchestrator | 2026-01-01 00:52:57 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED 2026-01-01 00:52:57.587188 | orchestrator | 2026-01-01 00:52:57 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:52:57.587368 | orchestrator | 2026-01-01 00:52:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:53:00.619346 | orchestrator | 2026-01-01 00:53:00 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:53:00.623078 | orchestrator | 2026-01-01 00:53:00 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:53:00.624339 | orchestrator | 2026-01-01 00:53:00 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED 2026-01-01 00:53:00.625382 | orchestrator | 2026-01-01 00:53:00 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:53:00.626463 | orchestrator | 2026-01-01 00:53:00 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED 2026-01-01 00:53:00.627550 | orchestrator | 2026-01-01 00:53:00 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:53:00.627807 | orchestrator | 2026-01-01 00:53:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:53:03.712009 | orchestrator | 2026-01-01 00:53:03 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:53:03.713433 | 
orchestrator | 2026-01-01 00:53:03 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:53:03.715122 | orchestrator | 2026-01-01 00:53:03 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state STARTED 2026-01-01 00:53:03.716984 | orchestrator | 2026-01-01 00:53:03 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:53:03.719156 | orchestrator | 2026-01-01 00:53:03 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED 2026-01-01 00:53:03.723292 | orchestrator | 2026-01-01 00:53:03 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:53:03.723344 | orchestrator | 2026-01-01 00:53:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:53:06.759308 | orchestrator | 2026-01-01 00:53:06 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:53:06.759626 | orchestrator | 2026-01-01 00:53:06 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:53:06.762282 | orchestrator | 2026-01-01 00:53:06 | INFO  | Task 6b2d1810-81c9-4dec-9a71-a923faf9d282 is in state SUCCESS 2026-01-01 00:53:06.764811 | orchestrator | 2026-01-01 00:53:06.764919 | orchestrator | 2026-01-01 00:53:06.764947 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:53:06.764965 | orchestrator | 2026-01-01 00:53:06.764977 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 00:53:06.764990 | orchestrator | Thursday 01 January 2026 00:52:27 +0000 (0:00:00.272) 0:00:00.272 ****** 2026-01-01 00:53:06.765001 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:53:06.765014 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:53:06.765025 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:53:06.765036 | orchestrator | 2026-01-01 00:53:06.765047 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2026-01-01 00:53:06.765058 | orchestrator | Thursday 01 January 2026 00:52:28 +0000 (0:00:00.534) 0:00:00.808 ****** 2026-01-01 00:53:06.765069 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-01 00:53:06.765081 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-01 00:53:06.765091 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-01 00:53:06.765103 | orchestrator | 2026-01-01 00:53:06.765114 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-01 00:53:06.765124 | orchestrator | 2026-01-01 00:53:06.765135 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-01 00:53:06.765146 | orchestrator | Thursday 01 January 2026 00:52:28 +0000 (0:00:00.628) 0:00:01.437 ****** 2026-01-01 00:53:06.765157 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:53:06.765168 | orchestrator | 2026-01-01 00:53:06.765241 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-01 00:53:06.765253 | orchestrator | Thursday 01 January 2026 00:52:29 +0000 (0:00:00.890) 0:00:02.327 ****** 2026-01-01 00:53:06.765278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 
'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765410 | orchestrator | 2026-01-01 00:53:06.765424 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-01 00:53:06.765437 | orchestrator | Thursday 01 January 2026 00:52:31 +0000 (0:00:01.426) 0:00:03.754 ****** 2026-01-01 00:53:06.765450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-01-01 00:53:06.765519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765555 | orchestrator | 2026-01-01 00:53:06.765568 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-01 00:53:06.765580 | orchestrator | Thursday 01 January 2026 00:52:35 +0000 (0:00:04.669) 0:00:08.423 ****** 2026-01-01 00:53:06.765594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765692 | orchestrator | 2026-01-01 00:53:06.765703 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-01-01 00:53:06.765739 | orchestrator | Thursday 01 January 2026 00:52:40 +0000 (0:00:04.578) 
0:00:13.001 ****** 2026-01-01 00:53:06.765751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.765842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.767404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:53:06.767450 | orchestrator | 
2026-01-01 00:53:06.767462 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-01-01 00:53:06.767474 | orchestrator | Thursday 01 January 2026 00:52:43 +0000 (0:00:03.054) 0:00:16.056 ****** 2026-01-01 00:53:06.767485 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 00:53:06.767497 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:53:06.767508 | orchestrator | } 2026-01-01 00:53:06.767519 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 00:53:06.767530 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:53:06.767541 | orchestrator | } 2026-01-01 00:53:06.767552 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 00:53:06.767563 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:53:06.767574 | orchestrator | } 2026-01-01 00:53:06.767585 | orchestrator | 2026-01-01 00:53:06.767596 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 00:53:06.767625 | orchestrator | Thursday 01 January 2026 00:52:44 +0000 (0:00:00.915) 0:00:16.971 ****** 2026-01-01 00:53:06.767638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-01 00:53:06.767657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-01 00:53:06.767669 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:53:06.767680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-01 00:53:06.767692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-01 00:53:06.767703 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:53:06.767738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-01 00:53:06.767759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-01 00:53:06.767779 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:53:06.767790 | orchestrator | 2026-01-01 00:53:06.767801 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-01 00:53:06.767812 | orchestrator | Thursday 01 January 2026 00:52:45 +0000 (0:00:01.660) 0:00:18.632 ****** 2026-01-01 00:53:06.767823 | orchestrator | 2026-01-01 00:53:06.767834 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-01 00:53:06.767845 | orchestrator | Thursday 01 January 2026 00:52:46 +0000 (0:00:00.110) 0:00:18.742 ****** 2026-01-01 00:53:06.767856 | orchestrator | 2026-01-01 00:53:06.767867 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-01-01 00:53:06.767878 | orchestrator | Thursday 01 January 2026 00:52:46 +0000 (0:00:00.080) 0:00:18.823 ****** 2026-01-01 00:53:06.767889 | orchestrator | 2026-01-01 00:53:06.767900 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-01 00:53:06.767911 | orchestrator | Thursday 01 January 2026 00:52:46 +0000 (0:00:00.107) 0:00:18.931 ****** 2026-01-01 00:53:06.767922 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:53:06.767933 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:53:06.767944 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:53:06.767955 | orchestrator | 2026-01-01 00:53:06.767966 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-01 00:53:06.767977 | orchestrator | Thursday 01 January 2026 00:52:55 +0000 (0:00:08.882) 0:00:27.813 ****** 2026-01-01 00:53:06.767988 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:53:06.767998 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:53:06.768009 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:53:06.768020 | orchestrator | 2026-01-01 00:53:06.768036 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:53:06.768048 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:53:06.768061 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:53:06.768072 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:53:06.768083 | orchestrator | 2026-01-01 00:53:06.768094 | orchestrator | 2026-01-01 00:53:06.768104 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:53:06.768115 | orchestrator | Thursday 01 
January 2026 00:53:04 +0000 (0:00:09.044) 0:00:36.858 ****** 2026-01-01 00:53:06.768126 | orchestrator | =============================================================================== 2026-01-01 00:53:06.768137 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.04s 2026-01-01 00:53:06.768148 | orchestrator | redis : Restart redis container ----------------------------------------- 8.88s 2026-01-01 00:53:06.768159 | orchestrator | redis : Copying over default config.json files -------------------------- 4.67s 2026-01-01 00:53:06.768170 | orchestrator | redis : Copying over redis config files --------------------------------- 4.58s 2026-01-01 00:53:06.768181 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.05s 2026-01-01 00:53:06.768191 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.66s 2026-01-01 00:53:06.768202 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.43s 2026-01-01 00:53:06.768213 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.92s 2026-01-01 00:53:06.768224 | orchestrator | redis : include_tasks --------------------------------------------------- 0.89s 2026-01-01 00:53:06.768235 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-01-01 00:53:06.768246 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s 2026-01-01 00:53:06.768263 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.30s 2026-01-01 00:53:06.768274 | orchestrator | 2026-01-01 00:53:06 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:53:06.768285 | orchestrator | 2026-01-01 00:53:06 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED 2026-01-01 00:53:06.768296 | orchestrator | 
2026-01-01 00:53:06 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:53:06.768307 | orchestrator | 2026-01-01 00:53:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:53:09.812180 | orchestrator | 2026-01-01 00:53:09 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:53:09.813981 | orchestrator | 2026-01-01 00:53:09 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:53:09.814971 | orchestrator | 2026-01-01 00:53:09 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:53:09.816078 | orchestrator | 2026-01-01 00:53:09 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state STARTED 2026-01-01 00:53:09.817116 | orchestrator | 2026-01-01 00:53:09 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:53:09.819240 | orchestrator | 2026-01-01 00:53:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:53:49.491754 | orchestrator | 2026-01-01 00:53:49 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:53:49.492307 | orchestrator | 2026-01-01 00:53:49 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:53:49.494877 | orchestrator | 2026-01-01 00:53:49 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:53:49.495749 | orchestrator | 2026-01-01 00:53:49 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:53:49.498540 | orchestrator | 2026-01-01 00:53:49 | INFO  | Task 2f8e23fb-f777-47c9-9022-d643a6681f19 is in state SUCCESS 2026-01-01 00:53:49.500483 | orchestrator | 2026-01-01 00:53:49.500518 | orchestrator | 2026-01-01 00:53:49.500531 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:53:49.500542 | orchestrator | 2026-01-01 00:53:49.500554 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 00:53:49.500565 | orchestrator | Thursday 01 January 2026 00:52:27 +0000 (0:00:00.411) 0:00:00.411 ****** 2026-01-01 00:53:49.500577 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:53:49.500589 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:53:49.500600 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:53:49.500610 | orchestrator | ok: 
[testbed-node-3] 2026-01-01 00:53:49.500621 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:53:49.500649 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:53:49.500660 | orchestrator | 2026-01-01 00:53:49.500672 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 00:53:49.500683 | orchestrator | Thursday 01 January 2026 00:52:29 +0000 (0:00:01.211) 0:00:01.622 ****** 2026-01-01 00:53:49.500816 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:53:49.500835 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:53:49.500847 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:53:49.500857 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:53:49.500868 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:53:49.500879 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:53:49.500890 | orchestrator | 2026-01-01 00:53:49.500901 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-01 00:53:49.500911 | orchestrator | 2026-01-01 00:53:49.500922 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-01 00:53:49.500933 | orchestrator | Thursday 01 January 2026 00:52:30 +0000 (0:00:01.001) 0:00:02.623 ****** 2026-01-01 00:53:49.500945 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:53:49.500958 | orchestrator | 2026-01-01 00:53:49.500968 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-01 
00:53:49.500979 | orchestrator | Thursday 01 January 2026 00:52:32 +0000 (0:00:02.033) 0:00:04.657 ****** 2026-01-01 00:53:49.501036 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-01 00:53:49.501049 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-01 00:53:49.501059 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-01 00:53:49.501070 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-01 00:53:49.501081 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-01 00:53:49.501092 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-01 00:53:49.501102 | orchestrator | 2026-01-01 00:53:49.501113 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-01 00:53:49.501124 | orchestrator | Thursday 01 January 2026 00:52:34 +0000 (0:00:02.069) 0:00:06.727 ****** 2026-01-01 00:53:49.501135 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-01 00:53:49.501146 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-01 00:53:49.501157 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-01 00:53:49.501184 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-01 00:53:49.501196 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-01 00:53:49.501206 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-01 00:53:49.501218 | orchestrator | 2026-01-01 00:53:49.501229 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-01 00:53:49.501245 | orchestrator | Thursday 01 January 2026 00:52:37 +0000 (0:00:03.646) 0:00:10.374 ****** 2026-01-01 00:53:49.501263 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-01 00:53:49.501287 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:53:49.501315 | orchestrator | 
skipping: [testbed-node-1] => (item=openvswitch)  2026-01-01 00:53:49.501331 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-01 00:53:49.501350 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:53:49.501367 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-01 00:53:49.501378 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:53:49.501389 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-01 00:53:49.501400 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:53:49.501411 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:53:49.501421 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-01 00:53:49.501432 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:53:49.501442 | orchestrator | 2026-01-01 00:53:49.501453 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-01 00:53:49.501464 | orchestrator | Thursday 01 January 2026 00:52:40 +0000 (0:00:02.862) 0:00:13.236 ****** 2026-01-01 00:53:49.501475 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:53:49.501485 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:53:49.501496 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:53:49.501506 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:53:49.501517 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:53:49.501528 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:53:49.501538 | orchestrator | 2026-01-01 00:53:49.501549 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-01 00:53:49.501560 | orchestrator | Thursday 01 January 2026 00:52:42 +0000 (0:00:01.616) 0:00:14.852 ****** 2026-01-01 00:53:49.501599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-01 00:53:49.501618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-01 00:53:49.501630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.501652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.501663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.501675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.501729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-01 00:53:49.501745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.501766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.501778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.501823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.501844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.501856 | orchestrator |
2026-01-01 00:53:49.501867 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-01-01 00:53:49.501878 | orchestrator | Thursday 01 January 2026 00:52:45 +0000 (0:00:02.675)       0:00:17.528 ******
2026-01-01 00:53:49.501896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.501908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.501927 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.501938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.501950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.501968 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.501985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.502003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.502062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.502077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.502089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.502110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.502122 | orchestrator |
2026-01-01 00:53:49.502138 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-01-01 00:53:49.502150 | orchestrator | Thursday 01 January 2026 00:52:49 +0000 (0:00:03.969)       0:00:21.497 ******
2026-01-01 00:53:49.502161 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:53:49.502179 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:53:49.502190 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:53:49.502201 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:53:49.502212 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:53:49.502223 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:53:49.502234 | orchestrator |
2026-01-01 00:53:49.502245 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-01-01 00:53:49.502256 | orchestrator | Thursday 01 January 2026 00:52:50 +0000 (0:00:01.239)       0:00:22.737 ******
2026-01-01 00:53:49.502267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.502279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.502291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.502302 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.502320 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.502339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.502350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.507439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.507495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.507506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.507531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.507558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.507568 | orchestrator |
2026-01-01 00:53:49.507578 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-01-01 00:53:49.507589 | orchestrator | Thursday 01 January 2026 00:52:53 +0000 (0:00:03.404)       0:00:26.142 ******
2026-01-01 00:53:49.507599 | orchestrator | changed: [testbed-node-0] => {
2026-01-01 00:53:49.507610 | orchestrator |     "msg": "Notifying handlers"
2026-01-01 00:53:49.507620 | orchestrator | }
2026-01-01 00:53:49.507630 | orchestrator | changed: [testbed-node-1] => {
2026-01-01 00:53:49.507640 | orchestrator |     "msg": "Notifying handlers"
2026-01-01 00:53:49.507650 | orchestrator | }
2026-01-01 00:53:49.507659 | orchestrator | changed: [testbed-node-2] => {
2026-01-01 00:53:49.507669 | orchestrator |     "msg": "Notifying handlers"
2026-01-01 00:53:49.507679 | orchestrator | }
2026-01-01 00:53:49.507688 | orchestrator | changed: [testbed-node-3] => {
2026-01-01 00:53:49.507861 | orchestrator |     "msg": "Notifying handlers"
2026-01-01 00:53:49.507878 | orchestrator | }
2026-01-01 00:53:49.507888 | orchestrator | changed: [testbed-node-4] => {
2026-01-01 00:53:49.507897 | orchestrator |     "msg": "Notifying handlers"
2026-01-01 00:53:49.507907 | orchestrator | }
2026-01-01 00:53:49.507917 | orchestrator | changed: [testbed-node-5] => {
2026-01-01 00:53:49.507926 | orchestrator |     "msg": "Notifying handlers"
2026-01-01 00:53:49.507936 | orchestrator | }
2026-01-01 00:53:49.507946 | orchestrator |
2026-01-01 00:53:49.507956 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-01 00:53:49.507966 | orchestrator | Thursday 01 January 2026 00:52:56 +0000 (0:00:02.515)       0:00:28.658 ******
2026-01-01 00:53:49.507985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.507997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.508021 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:53:49.508042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.508051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.508060 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:53:49.508068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.508080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.508089 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:53:49.508097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.508106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.508138 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:53:49.508154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.508163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.508171 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:53:49.508179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', ...})
2026-01-01 00:53:49.508192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', ...})
2026-01-01 00:53:49.508200 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:53:49.508208 | orchestrator |
2026-01-01 00:53:49.508216 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-01 00:53:49.508224 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:03.161)       0:00:31.819 ******
2026-01-01 00:53:49.508232 | orchestrator |
2026-01-01 00:53:49.508240 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-01 00:53:49.508253 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:00.228)       0:00:32.048 ******
2026-01-01 00:53:49.508261 | orchestrator |
2026-01-01 00:53:49.508269 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-01 00:53:49.508277 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:00.146)       0:00:32.195 ******
2026-01-01 00:53:49.508285 | orchestrator |
2026-01-01 00:53:49.508293 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-01 00:53:49.508301 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:00.161)       0:00:32.356 ****** 2026-01-01
00:53:49.508309 | orchestrator | 2026-01-01 00:53:49.508317 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-01 00:53:49.508325 | orchestrator | Thursday 01 January 2026 00:53:00 +0000 (0:00:00.558) 0:00:32.914 ****** 2026-01-01 00:53:49.508333 | orchestrator | 2026-01-01 00:53:49.508341 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-01 00:53:49.508349 | orchestrator | Thursday 01 January 2026 00:53:00 +0000 (0:00:00.127) 0:00:33.042 ****** 2026-01-01 00:53:49.508357 | orchestrator | 2026-01-01 00:53:49.508365 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-01 00:53:49.508373 | orchestrator | Thursday 01 January 2026 00:53:00 +0000 (0:00:00.131) 0:00:33.173 ****** 2026-01-01 00:53:49.508381 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:53:49.508389 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:53:49.508397 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:53:49.508405 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:53:49.508413 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:53:49.508421 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:53:49.508429 | orchestrator | 2026-01-01 00:53:49.508437 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-01 00:53:49.508449 | orchestrator | Thursday 01 January 2026 00:53:09 +0000 (0:00:08.856) 0:00:42.030 ****** 2026-01-01 00:53:49.508458 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:53:49.508467 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:53:49.508475 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:53:49.508483 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:53:49.508491 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:53:49.508498 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:53:49.508506 | orchestrator | 2026-01-01 
00:53:49.508514 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-01 00:53:49.508522 | orchestrator | Thursday 01 January 2026 00:53:12 +0000 (0:00:02.803) 0:00:44.833 ****** 2026-01-01 00:53:49.508530 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:53:49.508538 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:53:49.508546 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:53:49.508554 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:53:49.508562 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:53:49.508570 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:53:49.508578 | orchestrator | 2026-01-01 00:53:49.508586 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-01 00:53:49.508594 | orchestrator | Thursday 01 January 2026 00:53:22 +0000 (0:00:10.333) 0:00:55.166 ****** 2026-01-01 00:53:49.508603 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-01 00:53:49.508611 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-01 00:53:49.508619 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-01 00:53:49.508627 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-01 00:53:49.508635 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-01 00:53:49.508648 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-01 00:53:49.508656 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 
2026-01-01 00:53:49.508664 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-01 00:53:49.508672 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-01 00:53:49.508680 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-01 00:53:49.508688 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-01 00:53:49.508720 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-01 00:53:49.508728 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:53:49.508736 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:53:49.508744 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:53:49.508752 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:53:49.508760 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:53:49.508768 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:53:49.508775 | orchestrator | 2026-01-01 00:53:49.508783 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-01 00:53:49.508791 | orchestrator | Thursday 01 January 2026 00:53:31 +0000 (0:00:08.826) 0:01:03.993 ****** 2026-01-01 00:53:49.508800 | 
orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-01 00:53:49.508808 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:53:49.508816 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-01 00:53:49.508823 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:53:49.508831 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-01 00:53:49.508839 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:53:49.508847 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-01 00:53:49.508855 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-01 00:53:49.508863 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-01 00:53:49.508871 | orchestrator | 2026-01-01 00:53:49.508879 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-01 00:53:49.508887 | orchestrator | Thursday 01 January 2026 00:53:34 +0000 (0:00:02.941) 0:01:06.935 ****** 2026-01-01 00:53:49.508895 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-01 00:53:49.508903 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-01 00:53:49.508911 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:53:49.508919 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:53:49.508927 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-01 00:53:49.508935 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:53:49.508943 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-01 00:53:49.508956 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-01 00:53:49.508965 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-01 00:53:49.508973 | orchestrator | 2026-01-01 00:53:49.508981 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-01 
00:53:49.508989 | orchestrator | Thursday 01 January 2026 00:53:38 +0000 (0:00:04.318) 0:01:11.253 ****** 2026-01-01 00:53:49.509002 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:53:49.509010 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:53:49.509018 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:53:49.509026 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:53:49.509034 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:53:49.509042 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:53:49.509049 | orchestrator | 2026-01-01 00:53:49.509057 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:53:49.509066 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-01 00:53:49.509075 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-01 00:53:49.509083 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-01 00:53:49.509091 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-01 00:53:49.509099 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-01 00:53:49.509107 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-01 00:53:49.509115 | orchestrator | 2026-01-01 00:53:49.509123 | orchestrator | 2026-01-01 00:53:49.509131 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:53:49.509139 | orchestrator | Thursday 01 January 2026 00:53:46 +0000 (0:00:07.926) 0:01:19.180 ****** 2026-01-01 00:53:49.509147 | orchestrator | =============================================================================== 2026-01-01 00:53:49.509155 | orchestrator | 
openvswitch : Restart openvswitch-vswitchd container ------------------- 18.26s 2026-01-01 00:53:49.509163 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.86s 2026-01-01 00:53:49.509171 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.83s 2026-01-01 00:53:49.509179 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.32s 2026-01-01 00:53:49.509190 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.97s 2026-01-01 00:53:49.509198 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.65s 2026-01-01 00:53:49.509206 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.40s 2026-01-01 00:53:49.509214 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.16s 2026-01-01 00:53:49.509222 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.94s 2026-01-01 00:53:49.509230 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.86s 2026-01-01 00:53:49.509237 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.80s 2026-01-01 00:53:49.509245 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.68s 2026-01-01 00:53:49.509253 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.52s 2026-01-01 00:53:49.509261 | orchestrator | module-load : Load modules ---------------------------------------------- 2.07s 2026-01-01 00:53:49.509269 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.03s 2026-01-01 00:53:49.509277 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.62s 2026-01-01 00:53:49.509285 | orchestrator | 
openvswitch : Flush Handlers -------------------------------------------- 1.35s 2026-01-01 00:53:49.509293 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.24s 2026-01-01 00:53:49.509305 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.21s 2026-01-01 00:53:49.509313 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s 2026-01-01 00:53:49.509321 | orchestrator | 2026-01-01 00:53:49 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:53:49.509329 | orchestrator | 2026-01-01 00:53:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:53:52.533212 | orchestrator | 2026-01-01 00:53:52 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:53:52.533668 | orchestrator | 2026-01-01 00:53:52 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:53:52.534460 | orchestrator | 2026-01-01 00:53:52 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:53:52.536006 | orchestrator | 2026-01-01 00:53:52 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:53:52.539333 | orchestrator | 2026-01-01 00:53:52 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:53:52.539369 | orchestrator | 2026-01-01 00:53:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:53:55.582343 | orchestrator | 2026-01-01 00:53:55 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:53:55.583079 | orchestrator | 2026-01-01 00:53:55 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:53:55.583114 | orchestrator | 2026-01-01 00:53:55 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:53:55.584098 | orchestrator | 2026-01-01 00:53:55 | INFO  | Task 
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:53:55.584603 | orchestrator | 2026-01-01 00:53:55 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:53:55.584774 | orchestrator | 2026-01-01 00:53:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:53:58.662947 | orchestrator | 2026-01-01 00:53:58 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:53:58.663079 | orchestrator | 2026-01-01 00:53:58 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:53:58.663096 | orchestrator | 2026-01-01 00:53:58 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:53:58.663108 | orchestrator | 2026-01-01 00:53:58 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:53:58.663120 | orchestrator | 2026-01-01 00:53:58 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:53:58.663131 | orchestrator | 2026-01-01 00:53:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:54:01.706946 | orchestrator | 2026-01-01 00:54:01 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:54:01.707102 | orchestrator | 2026-01-01 00:54:01 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:54:01.707883 | orchestrator | 2026-01-01 00:54:01 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:54:01.708885 | orchestrator | 2026-01-01 00:54:01 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:54:01.709880 | orchestrator | 2026-01-01 00:54:01 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:54:01.709947 | orchestrator | 2026-01-01 00:54:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:54:04.919217 | orchestrator | 2026-01-01 00:54:04 | INFO  | Task 
9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:54:04.920156 | orchestrator | 2026-01-01 00:54:04 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:54:04.921846 | orchestrator | 2026-01-01 00:54:04 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:54:04.923109 | orchestrator | 2026-01-01 00:54:04 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:54:04.924649 | orchestrator | 2026-01-01 00:54:04 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:54:04.924676 | orchestrator | 2026-01-01 00:54:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:54:07.980427 | orchestrator | 2026-01-01 00:54:07 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:54:07.980945 | orchestrator | 2026-01-01 00:54:07 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:54:07.982657 | orchestrator | 2026-01-01 00:54:07 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:54:07.984733 | orchestrator | 2026-01-01 00:54:07 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:54:07.986352 | orchestrator | 2026-01-01 00:54:07 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:54:07.986390 | orchestrator | 2026-01-01 00:54:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:54:11.048990 | orchestrator | 2026-01-01 00:54:11 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:54:11.049235 | orchestrator | 2026-01-01 00:54:11 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:54:11.050173 | orchestrator | 2026-01-01 00:54:11 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:54:11.050919 | orchestrator | 2026-01-01 00:54:11 | INFO  | Task 
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:54:11.051917 | orchestrator | 2026-01-01 00:54:11 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:54:11.051944 | orchestrator | 2026-01-01 00:54:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:54:14.102315 | orchestrator | 2026-01-01 00:54:14 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:54:14.102421 | orchestrator | 2026-01-01 00:54:14 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:54:14.102885 | orchestrator | 2026-01-01 00:54:14 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:54:14.103902 | orchestrator | 2026-01-01 00:54:14 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:54:14.104334 | orchestrator | 2026-01-01 00:54:14 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:54:14.104367 | orchestrator | 2026-01-01 00:54:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:54:17.810348 | orchestrator | 2026-01-01 00:54:17 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:54:17.810457 | orchestrator | 2026-01-01 00:54:17 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:54:17.810473 | orchestrator | 2026-01-01 00:54:17 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:54:17.810486 | orchestrator | 2026-01-01 00:54:17 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:54:17.810497 | orchestrator | 2026-01-01 00:54:17 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:54:17.810536 | orchestrator | 2026-01-01 00:54:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:54:21.194293 | orchestrator | 2026-01-01 00:54:21 | INFO  | Task 
9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:54:21.195315 | orchestrator | 2026-01-01 00:54:21 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:54:21.196179 | orchestrator | 2026-01-01 00:54:21 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:54:21.198472 | orchestrator | 2026-01-01 00:54:21 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:54:21.199440 | orchestrator | 2026-01-01 00:54:21 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:54:21.199475 | orchestrator | 2026-01-01 00:54:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:54:24.260397 | orchestrator | 2026-01-01 00:54:24 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state STARTED 2026-01-01 00:54:24.260508 | orchestrator | 2026-01-01 00:54:24 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:54:24.264139 | orchestrator | 2026-01-01 00:54:24 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:54:24.264168 | orchestrator | 2026-01-01 00:54:24 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:54:24.264180 | orchestrator | 2026-01-01 00:54:24 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:54:24.264192 | orchestrator | 2026-01-01 00:54:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:54:27.471925 | orchestrator | 2026-01-01 00:54:27 | INFO  | Task 9f628ba7-cb05-468e-8e9f-fe5b42831951 is in state SUCCESS 2026-01-01 00:54:27.472999 | orchestrator | 2026-01-01 00:54:27.473070 | orchestrator | 2026-01-01 00:54:27.473113 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-01 00:54:27.473127 | orchestrator | 2026-01-01 00:54:27.473138 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 
'main' - Prerequisites] *** 2026-01-01 00:54:27.473215 | orchestrator | Thursday 01 January 2026 00:49:27 +0000 (0:00:00.169) 0:00:00.169 ****** 2026-01-01 00:54:27.473228 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:54:27.473240 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:54:27.473252 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:54:27.473263 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:54:27.473274 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:54:27.473285 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:54:27.473296 | orchestrator | 2026-01-01 00:54:27.473307 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-01 00:54:27.473319 | orchestrator | Thursday 01 January 2026 00:49:27 +0000 (0:00:00.695) 0:00:00.864 ****** 2026-01-01 00:54:27.473332 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:54:27.473344 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:54:27.473355 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:54:27.473366 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:54:27.473377 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:54:27.473388 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:54:27.473399 | orchestrator | 2026-01-01 00:54:27.473410 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-01 00:54:27.473422 | orchestrator | Thursday 01 January 2026 00:49:28 +0000 (0:00:00.634) 0:00:01.499 ****** 2026-01-01 00:54:27.473433 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:54:27.473444 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:54:27.473455 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:54:27.473466 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:54:27.473477 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:54:27.473488 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:54:27.473526 | 
orchestrator | 2026-01-01 00:54:27.473538 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-01 00:54:27.473549 | orchestrator | Thursday 01 January 2026 00:49:29 +0000 (0:00:00.707) 0:00:02.206 ****** 2026-01-01 00:54:27.473561 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:54:27.473571 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:54:27.473582 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:54:27.473597 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.473610 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:54:27.473622 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:54:27.473635 | orchestrator | 2026-01-01 00:54:27.473648 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-01 00:54:27.473662 | orchestrator | Thursday 01 January 2026 00:49:31 +0000 (0:00:02.415) 0:00:04.621 ****** 2026-01-01 00:54:27.473702 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:54:27.473714 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:54:27.473726 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:54:27.473736 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.473747 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:54:27.473758 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:54:27.473769 | orchestrator | 2026-01-01 00:54:27.473780 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-01 00:54:27.473791 | orchestrator | Thursday 01 January 2026 00:49:32 +0000 (0:00:01.422) 0:00:06.044 ****** 2026-01-01 00:54:27.473802 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:54:27.473812 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:54:27.473823 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:54:27.473834 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.473845 | orchestrator | 
changed: [testbed-node-1]
2026-01-01 00:54:27.473856 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:54:27.473866 | orchestrator |
2026-01-01 00:54:27.473877 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-01-01 00:54:27.473888 | orchestrator | Thursday 01 January 2026 00:49:34 +0000 (0:00:01.192) 0:00:07.237 ******
2026-01-01 00:54:27.473899 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:54:27.473910 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:54:27.473921 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:54:27.473932 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.473943 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.473953 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.473964 | orchestrator |
2026-01-01 00:54:27.473976 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-01-01 00:54:27.473987 | orchestrator | Thursday 01 January 2026 00:49:35 +0000 (0:00:01.152) 0:00:08.389 ******
2026-01-01 00:54:27.473998 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:54:27.474009 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:54:27.474098 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:54:27.474110 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.474120 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.474131 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.474142 | orchestrator |
2026-01-01 00:54:27.474153 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-01-01 00:54:27.474164 | orchestrator | Thursday 01 January 2026 00:49:36 +0000 (0:00:00.973) 0:00:09.362 ******
2026-01-01 00:54:27.474175 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-01 00:54:27.474186 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-01 00:54:27.474197 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:54:27.474208 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-01 00:54:27.474219 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-01 00:54:27.474230 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:54:27.474250 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-01 00:54:27.474261 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-01 00:54:27.474272 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:54:27.474283 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-01 00:54:27.474309 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-01 00:54:27.474321 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.474332 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-01 00:54:27.474343 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-01 00:54:27.474354 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.474364 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-01 00:54:27.474375 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-01 00:54:27.474386 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.474397 | orchestrator |
2026-01-01 00:54:27.474408 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-01-01 00:54:27.474419 | orchestrator | Thursday 01 January 2026 00:49:37 +0000 (0:00:01.083) 0:00:10.445 ******
2026-01-01 00:54:27.474430 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:54:27.474440 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:54:27.474451 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:54:27.474462 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.474473 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.474484 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.474495 | orchestrator |
2026-01-01 00:54:27.474506 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-01-01 00:54:27.474518 | orchestrator | Thursday 01 January 2026 00:49:39 +0000 (0:00:01.724) 0:00:12.170 ******
2026-01-01 00:54:27.474529 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:54:27.474540 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:54:27.474551 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:54:27.474562 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:54:27.474572 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:54:27.474583 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:54:27.474594 | orchestrator |
2026-01-01 00:54:27.474605 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-01-01 00:54:27.474616 | orchestrator | Thursday 01 January 2026 00:49:40 +0000 (0:00:00.990) 0:00:13.160 ******
2026-01-01 00:54:27.474627 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:54:27.474638 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:54:27.474649 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:54:27.474659 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:54:27.474694 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:54:27.474713 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:54:27.474732 | orchestrator |
2026-01-01 00:54:27.474751 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-01-01 00:54:27.474770 | orchestrator | Thursday 01 January 2026 00:49:46 +0000 (0:00:06.565) 0:00:19.726 ******
2026-01-01 00:54:27.474784 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:54:27.474794 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:54:27.474805 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:54:27.474816 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.474842 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.474864 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.474875 | orchestrator |
2026-01-01 00:54:27.474886 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-01-01 00:54:27.474897 | orchestrator | Thursday 01 January 2026 00:49:48 +0000 (0:00:02.038) 0:00:21.765 ******
2026-01-01 00:54:27.474907 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:54:27.474927 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:54:27.474938 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:54:27.474949 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.474959 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.474970 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.474981 | orchestrator |
2026-01-01 00:54:27.474993 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-01-01 00:54:27.475006 | orchestrator | Thursday 01 January 2026 00:49:51 +0000 (0:00:02.787) 0:00:24.552 ******
2026-01-01 00:54:27.475016 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:54:27.475027 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:54:27.475038 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:54:27.475049 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.475059 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.475070 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.475081 | orchestrator |
2026-01-01 00:54:27.475092 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-01-01 00:54:27.475109 | orchestrator | Thursday 01 January 2026 00:49:53 +0000 (0:00:01.694) 0:00:26.247 ******
2026-01-01 00:54:27.475120 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-01 00:54:27.475132 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-01 00:54:27.475143 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:54:27.475154 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-01 00:54:27.475165 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-01 00:54:27.475176 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:54:27.475187 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-01 00:54:27.475198 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-01 00:54:27.475209 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:54:27.475220 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-01 00:54:27.475230 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-01 00:54:27.475241 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-01 00:54:27.475252 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-01 00:54:27.475263 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.475274 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.475285 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-01 00:54:27.475296 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-01 00:54:27.475306 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.475317 | orchestrator |
2026-01-01 00:54:27.475328 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-01 00:54:27.475346 | orchestrator | Thursday 01 January 2026 00:49:55 +0000 (0:00:02.414) 0:00:28.662 ******
2026-01-01 00:54:27.475358 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:54:27.475369 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:54:27.475380 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:54:27.475390 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.475401 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.475412 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.475423 | orchestrator |
2026-01-01 00:54:27.475434 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-01 00:54:27.475445 | orchestrator | Thursday 01 January 2026 00:49:56 +0000 (0:00:01.278) 0:00:29.940 ******
2026-01-01 00:54:27.475456 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:54:27.475467 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:54:27.475478 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:54:27.475489 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.475499 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.475510 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.475527 | orchestrator |
2026-01-01 00:54:27.475538 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-01 00:54:27.475549 | orchestrator |
2026-01-01 00:54:27.475560 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-01 00:54:27.475571 | orchestrator | Thursday 01 January 2026 00:49:59 +0000 (0:00:02.335) 0:00:32.276 ******
2026-01-01 00:54:27.475582 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:54:27.475593 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:54:27.475604 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:54:27.475615 | orchestrator |
2026-01-01 00:54:27.475626 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-01 00:54:27.475637 | orchestrator | Thursday 01 January 2026 00:50:02 +0000 (0:00:02.926) 0:00:35.202 ******
2026-01-01 00:54:27.475648 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:54:27.475659 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:54:27.475669 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:54:27.475720 | orchestrator |
2026-01-01 00:54:27.475731 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-01 00:54:27.475742 | orchestrator | Thursday 01 January 2026 00:50:04 +0000 (0:00:02.649) 0:00:37.851 ******
2026-01-01 00:54:27.475753 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:54:27.475763 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:54:27.475774 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:54:27.475785 | orchestrator |
2026-01-01 00:54:27.475796 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-01 00:54:27.475807 | orchestrator | Thursday 01 January 2026 00:50:05 +0000 (0:00:01.120) 0:00:38.972 ******
2026-01-01 00:54:27.475817 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:54:27.475828 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:54:27.475838 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:54:27.475849 | orchestrator |
2026-01-01 00:54:27.475860 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-01 00:54:27.475871 | orchestrator | Thursday 01 January 2026 00:50:06 +0000 (0:00:00.854) 0:00:39.826 ******
2026-01-01 00:54:27.475882 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.475893 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.475903 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.475914 | orchestrator |
2026-01-01 00:54:27.475925 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-01-01 00:54:27.475935 | orchestrator | Thursday 01 January 2026 00:50:07 +0000 (0:00:00.300) 0:00:40.126 ******
2026-01-01 00:54:27.475946 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:54:27.475957 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:54:27.475968 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:54:27.475978 | orchestrator |
2026-01-01 00:54:27.475989 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-01 00:54:27.476000 | orchestrator | Thursday 01 January 2026 00:50:08 +0000 (0:00:01.838) 0:00:41.965 ******
2026-01-01 00:54:27.476011 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:54:27.476021 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:54:27.476032 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:54:27.476043 | orchestrator |
2026-01-01 00:54:27.476054 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-01 00:54:27.476064 | orchestrator | Thursday 01 January 2026 00:50:10 +0000 (0:00:01.974) 0:00:43.939 ******
2026-01-01 00:54:27.476075 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:54:27.476086 | orchestrator |
2026-01-01 00:54:27.476097 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-01 00:54:27.476113 | orchestrator | Thursday 01 January 2026 00:50:11 +0000 (0:00:00.642) 0:00:44.582 ******
2026-01-01 00:54:27.476125 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:54:27.476136 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:54:27.476146 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:54:27.476157 | orchestrator |
2026-01-01 00:54:27.476175 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-01 00:54:27.476186 | orchestrator | Thursday 01 January 2026 00:50:16 +0000 (0:00:04.819) 0:00:49.402 ******
2026-01-01 00:54:27.476197 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:54:27.476207 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.476218 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.476229 | orchestrator |
2026-01-01 00:54:27.476239 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-01 00:54:27.476250 | orchestrator | Thursday 01 January 2026 00:50:17 +0000 (0:00:01.277) 0:00:50.679 ******
2026-01-01 00:54:27.476261 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.476272 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.476282 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:54:27.476293 | orchestrator |
2026-01-01 00:54:27.476304 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-01 00:54:27.476315 | orchestrator | Thursday 01 January 2026 00:50:19 +0000 (0:00:01.799) 0:00:52.478 ******
2026-01-01 00:54:27.476326 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.476336 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.476347 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:54:27.476358 | orchestrator |
2026-01-01 00:54:27.476369 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-01 00:54:27.476387 | orchestrator | Thursday 01 January 2026 00:50:22 +0000 (0:00:02.843) 0:00:55.322 ******
2026-01-01 00:54:27.476398 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.476409 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.476420 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.476431 | orchestrator |
2026-01-01 00:54:27.476442 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-01 00:54:27.476453 | orchestrator | Thursday 01 January 2026 00:50:23 +0000 (0:00:01.059) 0:00:56.382 ******
2026-01-01 00:54:27.476464 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:54:27.476475 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:54:27.476486 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:54:27.476496 | orchestrator |
2026-01-01 00:54:27.476507 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-01 00:54:27.476518 | orchestrator | Thursday 01 January 2026 00:50:24 +0000 (0:00:01.160) 0:00:57.542 ******
2026-01-01 00:54:27.476529 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:54:27.476540 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:54:27.476550 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:54:27.476561 | orchestrator |
2026-01-01 00:54:27.476572 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-01 00:54:27.476583 | orchestrator | Thursday 01 January 2026 00:50:27 +0000 (0:00:02.895) 0:01:00.438 ******
2026-01-01 00:54:27.476594 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:54:27.476604 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:54:27.476615 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:54:27.476626 | orchestrator |
2026-01-01 00:54:27.476637 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-01 00:54:27.476648 | orchestrator | Thursday 01 January 2026 00:50:29 +0000 (0:00:02.170) 0:01:02.609 ******
2026-01-01 00:54:27.476659 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:54:27.476669 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:54:27.476699 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:54:27.476710 | orchestrator |
2026-01-01 00:54:27.476721 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-01 00:54:27.476732 | orchestrator | Thursday 01 January 2026 00:50:30 +0000 (0:00:01.112) 0:01:03.721 ******
2026-01-01 00:54:27.476743 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-01 00:54:27.476755 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-01 00:54:27.476774 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-01 00:54:27.476785 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-01 00:54:27.476796 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-01 00:54:27.476807 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-01 00:54:27.476818 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-01 00:54:27.476829 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-01 00:54:27.476840 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-01 00:54:27.476850 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-01 00:54:27.476861 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-01 00:54:27.476877 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-01 00:54:27.476888 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:54:27.476899 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:54:27.476910 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:54:27.476921 | orchestrator | 2026-01-01 00:54:27.476932 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-01 00:54:27.476943 | orchestrator | Thursday 01 January 2026 00:51:14 +0000 (0:00:43.632) 0:01:47.354 ****** 2026-01-01 00:54:27.476954 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:54:27.476965 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:54:27.476975 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:54:27.476986 | orchestrator | 2026-01-01 00:54:27.476997 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-01 00:54:27.477008 | orchestrator | Thursday 01 January 2026 00:51:14 +0000 (0:00:00.609) 0:01:47.963 ****** 2026-01-01 00:54:27.477019 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.477030 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:54:27.477040 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:54:27.477051 | orchestrator | 2026-01-01 00:54:27.477062 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-01 00:54:27.477073 | orchestrator | Thursday 01 January 2026 00:51:16 +0000 (0:00:01.333) 0:01:49.297 ****** 2026-01-01 00:54:27.477084 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.477095 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:54:27.477105 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:54:27.477116 | orchestrator | 2026-01-01 00:54:27.477133 | orchestrator | TASK [k3s_server : Enable and check K3s service] 
******************************* 2026-01-01 00:54:27.477144 | orchestrator | Thursday 01 January 2026 00:51:17 +0000 (0:00:01.559) 0:01:50.857 ****** 2026-01-01 00:54:27.477155 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.477212 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:54:27.477226 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:54:27.477237 | orchestrator | 2026-01-01 00:54:27.477248 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-01 00:54:27.477259 | orchestrator | Thursday 01 January 2026 00:51:43 +0000 (0:00:25.362) 0:02:16.220 ****** 2026-01-01 00:54:27.477270 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:54:27.477281 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:54:27.477311 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:54:27.477322 | orchestrator | 2026-01-01 00:54:27.477333 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-01 00:54:27.477344 | orchestrator | Thursday 01 January 2026 00:51:44 +0000 (0:00:01.135) 0:02:17.355 ****** 2026-01-01 00:54:27.477355 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:54:27.477366 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:54:27.477377 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:54:27.477387 | orchestrator | 2026-01-01 00:54:27.477398 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-01 00:54:27.477409 | orchestrator | Thursday 01 January 2026 00:51:45 +0000 (0:00:00.827) 0:02:18.183 ****** 2026-01-01 00:54:27.477420 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.477431 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:54:27.477442 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:54:27.477453 | orchestrator | 2026-01-01 00:54:27.477464 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-01 
00:54:27.477475 | orchestrator | Thursday 01 January 2026 00:51:45 +0000 (0:00:00.817) 0:02:19.001 ****** 2026-01-01 00:54:27.477486 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:54:27.477497 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:54:27.477508 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:54:27.477519 | orchestrator | 2026-01-01 00:54:27.477530 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-01 00:54:27.477541 | orchestrator | Thursday 01 January 2026 00:51:46 +0000 (0:00:01.008) 0:02:20.009 ****** 2026-01-01 00:54:27.477552 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:54:27.477563 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:54:27.477574 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:54:27.477585 | orchestrator | 2026-01-01 00:54:27.477596 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-01 00:54:27.477607 | orchestrator | Thursday 01 January 2026 00:51:47 +0000 (0:00:00.341) 0:02:20.350 ****** 2026-01-01 00:54:27.477618 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.477629 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:54:27.477640 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:54:27.477650 | orchestrator | 2026-01-01 00:54:27.477661 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-01 00:54:27.477690 | orchestrator | Thursday 01 January 2026 00:51:47 +0000 (0:00:00.690) 0:02:21.040 ****** 2026-01-01 00:54:27.477702 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.477713 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:54:27.477724 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:54:27.477734 | orchestrator | 2026-01-01 00:54:27.477745 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-01 00:54:27.477756 | orchestrator | Thursday 01 
January 2026 00:51:48 +0000 (0:00:00.761) 0:02:21.802 ****** 2026-01-01 00:54:27.477767 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.477778 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:54:27.477789 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:54:27.477799 | orchestrator | 2026-01-01 00:54:27.477810 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-01 00:54:27.477821 | orchestrator | Thursday 01 January 2026 00:51:50 +0000 (0:00:01.657) 0:02:23.460 ****** 2026-01-01 00:54:27.477832 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:54:27.477843 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:54:27.477853 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:54:27.477864 | orchestrator | 2026-01-01 00:54:27.477875 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-01 00:54:27.477886 | orchestrator | Thursday 01 January 2026 00:51:51 +0000 (0:00:01.240) 0:02:24.701 ****** 2026-01-01 00:54:27.477897 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:54:27.477908 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:54:27.477918 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:54:27.477929 | orchestrator | 2026-01-01 00:54:27.477948 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-01 00:54:27.477959 | orchestrator | Thursday 01 January 2026 00:51:52 +0000 (0:00:00.419) 0:02:25.121 ****** 2026-01-01 00:54:27.477970 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:54:27.477981 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:54:27.477992 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:54:27.478002 | orchestrator | 2026-01-01 00:54:27.478040 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-01 00:54:27.478054 | orchestrator | Thursday 01 January 
2026 00:51:52 +0000 (0:00:00.344) 0:02:25.465 ****** 2026-01-01 00:54:27.478065 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:54:27.478076 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:54:27.478087 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:54:27.478098 | orchestrator | 2026-01-01 00:54:27.478109 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-01 00:54:27.478120 | orchestrator | Thursday 01 January 2026 00:51:53 +0000 (0:00:00.973) 0:02:26.439 ****** 2026-01-01 00:54:27.478131 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:54:27.478142 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:54:27.478152 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:54:27.478163 | orchestrator | 2026-01-01 00:54:27.478175 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-01 00:54:27.478186 | orchestrator | Thursday 01 January 2026 00:51:54 +0000 (0:00:00.724) 0:02:27.164 ****** 2026-01-01 00:54:27.478197 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-01 00:54:27.478215 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-01 00:54:27.478227 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-01 00:54:27.478238 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-01 00:54:27.478249 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-01 00:54:27.478260 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-01 00:54:27.478271 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-01 00:54:27.478282 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-01 00:54:27.478293 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-01 00:54:27.478304 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-01 00:54:27.478314 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-01 00:54:27.478325 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-01 00:54:27.478336 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-01 00:54:27.478347 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-01 00:54:27.478358 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-01 00:54:27.478369 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-01 00:54:27.478380 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-01 00:54:27.478391 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-01 00:54:27.478402 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-01 00:54:27.478413 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-01 00:54:27.478432 | orchestrator | 2026-01-01 00:54:27.478443 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-01 00:54:27.478454 | orchestrator | 2026-01-01 00:54:27.478534 | 
TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Thursday 01 January 2026 00:51:57 +0000 (0:00:03.113) 0:02:30.277 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Thursday 01 January 2026 00:51:57 +0000 (0:00:00.548) 0:02:30.826 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Thursday 01 January 2026 00:51:58 +0000 (0:00:00.634) 0:02:31.460 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Thursday 01 January 2026 00:51:58 +0000 (0:00:00.339) 0:02:31.800 ******
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Thursday 01 January 2026 00:51:59 +0000 (0:00:00.697) 0:02:32.498 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Thursday 01 January 2026 00:51:59 +0000 (0:00:00.333) 0:02:32.831 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Thursday 01 January 2026 00:52:00 +0000 (0:00:00.335) 0:02:33.167 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Thursday 01 January 2026 00:52:00 +0000 (0:00:00.359) 0:02:33.527 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Thursday 01 January 2026 00:52:01 +0000 (0:00:00.934) 0:02:34.461 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Thursday 01 January 2026 00:52:02 +0000 (0:00:01.145) 0:02:35.607 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Thursday 01 January 2026 00:52:03 +0000 (0:00:01.190) 0:02:36.797 ******
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Thursday 01 January 2026 00:52:14 +0000 (0:00:11.117) 0:02:47.915 ******
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Thursday 01 January 2026 00:52:15 +0000 (0:00:00.822) 0:02:48.738 ******
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Thursday 01 January 2026 00:52:16 +0000 (0:00:00.453) 0:02:49.191 ******
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Thursday 01 January 2026 00:52:16 +0000 (0:00:00.593) 0:02:49.784 ******
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Thursday 01 January 2026 00:52:17 +0000 (0:00:00.938) 0:02:50.723 ******
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Thursday 01 January 2026 00:52:18 +0000 (0:00:00.639) 0:02:51.363 ******
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Thursday 01 January 2026 00:52:20 +0000 (0:00:01.734) 0:02:53.097 ******
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Thursday 01 January 2026 00:52:20 +0000 (0:00:00.944) 0:02:54.041 ******
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Thursday 01 January 2026 00:52:21 +0000 (0:00:00.576) 0:02:54.618 ******
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Thursday 01 January 2026 00:52:22 +0000 (0:00:00.730) 0:02:55.348 ******
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Thursday 01 January 2026 00:52:22 +0000 (0:00:00.219) 0:02:55.568 ******
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Thursday 01 January 2026 00:52:22 +0000 (0:00:00.236) 0:02:55.805 ******
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Thursday 01 January 2026 00:52:23 +0000 (0:00:01.267) 0:02:57.073 ******
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Thursday 01 January 2026 00:52:26 +0000 (0:00:02.060) 0:02:59.133 ******
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Thursday 01 January 2026 00:52:28 +0000 (0:00:01.982) 0:03:01.115 ******
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Thursday 01 January 2026 00:52:28 +0000 (0:00:00.521) 0:03:01.637 ******
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Thursday 01 January 2026 00:52:37 +0000 (0:00:09.394) 0:03:11.031 ******
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Thursday 01 January 2026 00:52:56 +0000 (0:00:18.279) 0:03:29.310 ******
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Thursday 01 January 2026 00:52:57 +0000 (0:00:00.812) 0:03:30.123 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Thursday 01 January 2026 00:52:57 +0000 (0:00:00.309) 0:03:30.432 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Thursday 01 January 2026 00:52:57 +0000 (0:00:00.303) 0:03:30.735 ******
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Thursday 01 January 2026 00:52:58 +0000 (0:00:00.711) 0:03:31.447 ******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Thursday 01 January 2026 00:52:59 +0000 (0:00:00.740) 0:03:32.188 ******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Thursday 01 January 2026 00:53:00 +0000 (0:00:00.946) 0:03:33.135 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Thursday 01 January 2026 00:53:00 +0000 (0:00:00.143) 0:03:33.279 ******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Thursday 01 January 2026 00:53:01 +0000 (0:00:01.298) 0:03:34.577 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Thursday 01 January 2026 00:53:01 +0000 (0:00:00.108) 0:03:34.685 ******
skipping: [testbed-node-0]
TASK [k3s_server_post : Determine if Cilium needs update] **********************
Thursday 01 January 2026 00:53:01 +0000 (0:00:00.206) 0:03:34.891 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Thursday 01 January 2026 00:53:01 +0000 (0:00:00.126) 0:03:35.018 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Thursday 01 January 2026 00:53:02 +0000 (0:00:00.189) 0:03:35.208 ******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Thursday 01 January 2026 00:53:08 +0000 (0:00:06.093) 0:03:41.301 ******
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Thursday 01 January 2026 00:53:54 +0000 (0:00:46.338) 0:04:27.639 ******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Thursday 01 January 2026 00:53:55 +0000 (0:00:01.250) 0:04:28.889 ******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Thursday 01 January 2026 00:53:57 +0000 (0:00:01.696) 0:04:30.586 ******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Thursday 01 January 2026 00:53:58 +0000 (0:00:01.421) 0:04:32.007 ******
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Thursday 01 January 2026 00:53:59 +0000 (0:00:00.130) 0:04:32.137 ******
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Thursday 01 January 2026 00:54:01 +0000 (0:00:02.075) 0:04:34.213 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Thursday 01 January 2026 00:54:01 +0000 (0:00:00.347) 0:04:34.561 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Thursday 01 January 2026 00:54:02 +0000 (0:00:01.236) 0:04:35.797 ******
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks]
Thursday 01 January 2026 00:54:02 +0000 (0:00:00.187) 0:04:35.984 ******
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Thursday 01 January 2026 00:54:03 +0000 (0:00:00.209) 0:04:36.194 ******
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Thursday 01 January 2026 00:54:08 +0000 (0:00:05.826) 0:04:42.020 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [Manage labels] ***********************************************************
Thursday 01 January 2026 00:54:10 +0000 (0:00:01.112) 0:04:43.133 ******
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Thursday 01 January 2026 00:54:24 +0000 (0:00:14.372) 0:04:57.505 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Thursday 01 January 2026 00:54:25 +0000 (0:00:00.708) 0:04:58.213 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=21  changed=11  unreachable=0  failed=0  skipped=0   rescued=0  ignored=0
testbed-node-0  : ok=50  changed=23  unreachable=0  failed=0  skipped=28  rescued=0  ignored=0
testbed-node-1  : ok=38  changed=16  unreachable=0  failed=0  skipped=25  rescued=0  ignored=0
testbed-node-2  : ok=38  changed=16  unreachable=0  failed=0  skipped=25  rescued=0  ignored=0
testbed-node-3  : ok=16  changed=8   unreachable=0  failed=0  skipped=17  rescued=0  ignored=0
testbed-node-4  : ok=16  changed=8   unreachable=0  failed=0  skipped=17  rescued=0  ignored=0
testbed-node-5  : ok=16  changed=8   unreachable=0  failed=0  skipped=17  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Thursday 01 January 2026 00:54:25 +0000 (0:00:00.843) 0:04:59.057 ******
===============================================================================
k3s_server_post : Wait for Cilium resources ---------------------------- 46.34s
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.63s
k3s_server : Enable and check K3s service ------------------------------ 25.36s
kubectl : Install required packages ------------------------------------ 18.28s
Manage labels ---------------------------------------------------------- 14.37s
k3s_agent : Manage k3s service ----------------------------------------- 11.12s
kubectl : Add repository Debian ----------------------------------------- 9.39s
k3s_download : Download k3s binary x64 ---------------------------------- 6.57s
k3s_server_post : Install Cilium ---------------------------------------- 6.09s
k9s : Install k9s packages ---------------------------------------------- 5.83s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.82s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.11s
k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.93s
k3s_server : Init cluster inside the transient k3s-init service --------- 2.90s
k3s_server : Copy vip manifest to first master -------------------------- 2.84s
k3s_download : Download k3s binary armhf -------------------------------- 2.79s
k3s_server : Stop k3s-init ---------------------------------------------- 2.65s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.42s
k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.41s
k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.34s

2026-01-01 00:54:27 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:27 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:27 | INFO  | Task 6999abec-7287-4cf9-b94a-ba09c085c10b is in state STARTED
2026-01-01 00:54:27 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:27 | INFO  | Task 322e499b-b1a2-4a62-ad4d-36e83486edd2 is in state STARTED
2026-01-01 00:54:27 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:27 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:54:30 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:30 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:30 | INFO  | Task 6999abec-7287-4cf9-b94a-ba09c085c10b is in state STARTED
2026-01-01 00:54:30 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:30 | INFO  | Task 322e499b-b1a2-4a62-ad4d-36e83486edd2 is in state STARTED
2026-01-01 00:54:30 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:30 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:54:33 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:33 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:33 | INFO  | Task 6999abec-7287-4cf9-b94a-ba09c085c10b is in state STARTED
2026-01-01 00:54:33 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:33 | INFO  | Task 322e499b-b1a2-4a62-ad4d-36e83486edd2 is in state STARTED
2026-01-01 00:54:33 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:33 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:54:36 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:36 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:36 | INFO  | Task 6999abec-7287-4cf9-b94a-ba09c085c10b is in state STARTED
2026-01-01 00:54:36 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:36 | INFO  | Task 322e499b-b1a2-4a62-ad4d-36e83486edd2 is in state SUCCESS
2026-01-01 00:54:36 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:36 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:54:39 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:39 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:39 | INFO  | Task 6999abec-7287-4cf9-b94a-ba09c085c10b is in state SUCCESS
2026-01-01 00:54:39 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:39 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:39 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:54:42 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:42 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:42 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:42 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:42 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:54:45 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:45 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:45 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:45 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:45 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:54:48 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:48 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:48 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:48 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:48 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:54:51 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:51 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:51 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:51 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:51 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:54:54 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:54 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:54 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:54 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:54 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:54:57 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:54:57 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:54:57 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:54:57 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:54:57 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:55:01 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED
2026-01-01 00:55:01 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED
2026-01-01 00:55:01 | INFO  | Task
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:01.031117 | orchestrator | 2026-01-01 00:55:01 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:01.031145 | orchestrator | 2026-01-01 00:55:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:04.068620 | orchestrator | 2026-01-01 00:55:04 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:04.070426 | orchestrator | 2026-01-01 00:55:04 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:04.072224 | orchestrator | 2026-01-01 00:55:04 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:04.073998 | orchestrator | 2026-01-01 00:55:04 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:04.074079 | orchestrator | 2026-01-01 00:55:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:07.109630 | orchestrator | 2026-01-01 00:55:07 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:07.110467 | orchestrator | 2026-01-01 00:55:07 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:07.111777 | orchestrator | 2026-01-01 00:55:07 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:07.112614 | orchestrator | 2026-01-01 00:55:07 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:07.112916 | orchestrator | 2026-01-01 00:55:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:10.151175 | orchestrator | 2026-01-01 00:55:10 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:10.154146 | orchestrator | 2026-01-01 00:55:10 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:10.156961 | orchestrator | 2026-01-01 00:55:10 | INFO  | Task 
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:10.160366 | orchestrator | 2026-01-01 00:55:10 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:10.160412 | orchestrator | 2026-01-01 00:55:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:13.201408 | orchestrator | 2026-01-01 00:55:13 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:13.202148 | orchestrator | 2026-01-01 00:55:13 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:13.203558 | orchestrator | 2026-01-01 00:55:13 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:13.204882 | orchestrator | 2026-01-01 00:55:13 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:13.204905 | orchestrator | 2026-01-01 00:55:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:16.241427 | orchestrator | 2026-01-01 00:55:16 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:16.241804 | orchestrator | 2026-01-01 00:55:16 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:16.244286 | orchestrator | 2026-01-01 00:55:16 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:16.247335 | orchestrator | 2026-01-01 00:55:16 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:16.247355 | orchestrator | 2026-01-01 00:55:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:19.293102 | orchestrator | 2026-01-01 00:55:19 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:19.295951 | orchestrator | 2026-01-01 00:55:19 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:19.297896 | orchestrator | 2026-01-01 00:55:19 | INFO  | Task 
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:19.300510 | orchestrator | 2026-01-01 00:55:19 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:19.300608 | orchestrator | 2026-01-01 00:55:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:22.352754 | orchestrator | 2026-01-01 00:55:22 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:22.357083 | orchestrator | 2026-01-01 00:55:22 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:22.359029 | orchestrator | 2026-01-01 00:55:22 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:22.360460 | orchestrator | 2026-01-01 00:55:22 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:22.360526 | orchestrator | 2026-01-01 00:55:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:25.430270 | orchestrator | 2026-01-01 00:55:25 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:25.435349 | orchestrator | 2026-01-01 00:55:25 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:25.437461 | orchestrator | 2026-01-01 00:55:25 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:25.439398 | orchestrator | 2026-01-01 00:55:25 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:25.439609 | orchestrator | 2026-01-01 00:55:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:28.479319 | orchestrator | 2026-01-01 00:55:28 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:28.480944 | orchestrator | 2026-01-01 00:55:28 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:28.481258 | orchestrator | 2026-01-01 00:55:28 | INFO  | Task 
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:28.483472 | orchestrator | 2026-01-01 00:55:28 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:28.483506 | orchestrator | 2026-01-01 00:55:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:31.522275 | orchestrator | 2026-01-01 00:55:31 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:31.523416 | orchestrator | 2026-01-01 00:55:31 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:31.524820 | orchestrator | 2026-01-01 00:55:31 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:31.526193 | orchestrator | 2026-01-01 00:55:31 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:31.526227 | orchestrator | 2026-01-01 00:55:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:34.572286 | orchestrator | 2026-01-01 00:55:34 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:34.573242 | orchestrator | 2026-01-01 00:55:34 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:34.574780 | orchestrator | 2026-01-01 00:55:34 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:34.575672 | orchestrator | 2026-01-01 00:55:34 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:34.575707 | orchestrator | 2026-01-01 00:55:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:37.602594 | orchestrator | 2026-01-01 00:55:37 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:37.604212 | orchestrator | 2026-01-01 00:55:37 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:37.607230 | orchestrator | 2026-01-01 00:55:37 | INFO  | Task 
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:37.609680 | orchestrator | 2026-01-01 00:55:37 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:37.609736 | orchestrator | 2026-01-01 00:55:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:40.646328 | orchestrator | 2026-01-01 00:55:40 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:40.651151 | orchestrator | 2026-01-01 00:55:40 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:40.653354 | orchestrator | 2026-01-01 00:55:40 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:40.654773 | orchestrator | 2026-01-01 00:55:40 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:40.654802 | orchestrator | 2026-01-01 00:55:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:43.703975 | orchestrator | 2026-01-01 00:55:43 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:43.705460 | orchestrator | 2026-01-01 00:55:43 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:43.707984 | orchestrator | 2026-01-01 00:55:43 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:43.709129 | orchestrator | 2026-01-01 00:55:43 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:43.709149 | orchestrator | 2026-01-01 00:55:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:46.747724 | orchestrator | 2026-01-01 00:55:46 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:46.748042 | orchestrator | 2026-01-01 00:55:46 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:46.752458 | orchestrator | 2026-01-01 00:55:46 | INFO  | Task 
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:46.753796 | orchestrator | 2026-01-01 00:55:46 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:46.754147 | orchestrator | 2026-01-01 00:55:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:49.797254 | orchestrator | 2026-01-01 00:55:49 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:49.797485 | orchestrator | 2026-01-01 00:55:49 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:49.798567 | orchestrator | 2026-01-01 00:55:49 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:49.799623 | orchestrator | 2026-01-01 00:55:49 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:49.799680 | orchestrator | 2026-01-01 00:55:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:52.846761 | orchestrator | 2026-01-01 00:55:52 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:52.848271 | orchestrator | 2026-01-01 00:55:52 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:52.850239 | orchestrator | 2026-01-01 00:55:52 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:52.852216 | orchestrator | 2026-01-01 00:55:52 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:52.852260 | orchestrator | 2026-01-01 00:55:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:55.894532 | orchestrator | 2026-01-01 00:55:55 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:55.895818 | orchestrator | 2026-01-01 00:55:55 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:55.896883 | orchestrator | 2026-01-01 00:55:55 | INFO  | Task 
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:55.900550 | orchestrator | 2026-01-01 00:55:55 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:55.900602 | orchestrator | 2026-01-01 00:55:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:58.942181 | orchestrator | 2026-01-01 00:55:58 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:55:58.942752 | orchestrator | 2026-01-01 00:55:58 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:55:58.943806 | orchestrator | 2026-01-01 00:55:58 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:55:58.945156 | orchestrator | 2026-01-01 00:55:58 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:55:58.945247 | orchestrator | 2026-01-01 00:55:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:01.998134 | orchestrator | 2026-01-01 00:56:01 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:56:02.005162 | orchestrator | 2026-01-01 00:56:02 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:56:02.007085 | orchestrator | 2026-01-01 00:56:02 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:56:02.008107 | orchestrator | 2026-01-01 00:56:02 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:56:02.009031 | orchestrator | 2026-01-01 00:56:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:05.079567 | orchestrator | 2026-01-01 00:56:05 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:56:05.081958 | orchestrator | 2026-01-01 00:56:05 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:56:05.081991 | orchestrator | 2026-01-01 00:56:05 | INFO  | Task 
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:56:05.083142 | orchestrator | 2026-01-01 00:56:05 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:56:05.083167 | orchestrator | 2026-01-01 00:56:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:08.129324 | orchestrator | 2026-01-01 00:56:08 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:56:08.130440 | orchestrator | 2026-01-01 00:56:08 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:56:08.132856 | orchestrator | 2026-01-01 00:56:08 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:56:08.135255 | orchestrator | 2026-01-01 00:56:08 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:56:08.135566 | orchestrator | 2026-01-01 00:56:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:11.188919 | orchestrator | 2026-01-01 00:56:11 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:56:11.193046 | orchestrator | 2026-01-01 00:56:11 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:56:11.193087 | orchestrator | 2026-01-01 00:56:11 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:56:11.193097 | orchestrator | 2026-01-01 00:56:11 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:56:11.193106 | orchestrator | 2026-01-01 00:56:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:14.228906 | orchestrator | 2026-01-01 00:56:14 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:56:14.229723 | orchestrator | 2026-01-01 00:56:14 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:56:14.230531 | orchestrator | 2026-01-01 00:56:14 | INFO  | Task 
66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:56:14.231469 | orchestrator | 2026-01-01 00:56:14 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:56:14.231528 | orchestrator | 2026-01-01 00:56:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:17.263758 | orchestrator | 2026-01-01 00:56:17 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:56:17.264207 | orchestrator | 2026-01-01 00:56:17 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state STARTED 2026-01-01 00:56:17.265047 | orchestrator | 2026-01-01 00:56:17 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:56:17.266274 | orchestrator | 2026-01-01 00:56:17 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:56:17.266351 | orchestrator | 2026-01-01 00:56:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:20.302895 | orchestrator | 2026-01-01 00:56:20 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:56:20.310870 | orchestrator | 2026-01-01 00:56:20.310981 | orchestrator | 2026-01-01 00:56:20.310998 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-01 00:56:20.311009 | orchestrator | 2026-01-01 00:56:20.311020 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-01 00:56:20.311030 | orchestrator | Thursday 01 January 2026 00:54:31 +0000 (0:00:00.187) 0:00:00.187 ****** 2026-01-01 00:56:20.311041 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-01 00:56:20.311051 | orchestrator | 2026-01-01 00:56:20.311061 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-01 00:56:20.311071 | orchestrator | Thursday 01 January 2026 00:54:32 +0000 (0:00:00.829) 0:00:01.016 ****** 2026-01-01 
00:56:20.311082 | orchestrator | changed: [testbed-manager] 2026-01-01 00:56:20.311092 | orchestrator | 2026-01-01 00:56:20.311102 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-01 00:56:20.311112 | orchestrator | Thursday 01 January 2026 00:54:33 +0000 (0:00:01.401) 0:00:02.418 ****** 2026-01-01 00:56:20.311122 | orchestrator | changed: [testbed-manager] 2026-01-01 00:56:20.311132 | orchestrator | 2026-01-01 00:56:20.311142 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:56:20.311152 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:56:20.311163 | orchestrator | 2026-01-01 00:56:20.311173 | orchestrator | 2026-01-01 00:56:20.311183 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:56:20.311193 | orchestrator | Thursday 01 January 2026 00:54:34 +0000 (0:00:00.504) 0:00:02.922 ****** 2026-01-01 00:56:20.311203 | orchestrator | =============================================================================== 2026-01-01 00:56:20.311212 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.40s 2026-01-01 00:56:20.311222 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.83s 2026-01-01 00:56:20.311232 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s 2026-01-01 00:56:20.311242 | orchestrator | 2026-01-01 00:56:20.311251 | orchestrator | 2026-01-01 00:56:20.311261 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-01 00:56:20.311271 | orchestrator | 2026-01-01 00:56:20.311281 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-01 00:56:20.311291 | orchestrator | Thursday 01 January 2026 
00:54:31 +0000 (0:00:00.175) 0:00:00.176 ****** 2026-01-01 00:56:20.311300 | orchestrator | ok: [testbed-manager] 2026-01-01 00:56:20.311312 | orchestrator | 2026-01-01 00:56:20.311321 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-01 00:56:20.311331 | orchestrator | Thursday 01 January 2026 00:54:32 +0000 (0:00:00.738) 0:00:00.914 ****** 2026-01-01 00:56:20.311341 | orchestrator | ok: [testbed-manager] 2026-01-01 00:56:20.311351 | orchestrator | 2026-01-01 00:56:20.311366 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-01 00:56:20.311408 | orchestrator | Thursday 01 January 2026 00:54:32 +0000 (0:00:00.719) 0:00:01.633 ****** 2026-01-01 00:56:20.311426 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-01 00:56:20.311443 | orchestrator | 2026-01-01 00:56:20.311456 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-01 00:56:20.311472 | orchestrator | Thursday 01 January 2026 00:54:33 +0000 (0:00:00.889) 0:00:02.522 ****** 2026-01-01 00:56:20.311488 | orchestrator | changed: [testbed-manager] 2026-01-01 00:56:20.311504 | orchestrator | 2026-01-01 00:56:20.311521 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-01 00:56:20.311537 | orchestrator | Thursday 01 January 2026 00:54:35 +0000 (0:00:01.635) 0:00:04.158 ****** 2026-01-01 00:56:20.311554 | orchestrator | changed: [testbed-manager] 2026-01-01 00:56:20.311570 | orchestrator | 2026-01-01 00:56:20.311585 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-01 00:56:20.311595 | orchestrator | Thursday 01 January 2026 00:54:35 +0000 (0:00:00.625) 0:00:04.784 ****** 2026-01-01 00:56:20.311604 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-01 00:56:20.311639 | orchestrator | 2026-01-01 
00:56:20.311654 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-01 00:56:20.311663 | orchestrator | Thursday 01 January 2026 00:54:37 +0000 (0:00:01.651) 0:00:06.436 ****** 2026-01-01 00:56:20.311673 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-01 00:56:20.311683 | orchestrator | 2026-01-01 00:56:20.311692 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-01 00:56:20.311702 | orchestrator | Thursday 01 January 2026 00:54:38 +0000 (0:00:00.857) 0:00:07.294 ****** 2026-01-01 00:56:20.311711 | orchestrator | ok: [testbed-manager] 2026-01-01 00:56:20.311721 | orchestrator | 2026-01-01 00:56:20.311730 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-01 00:56:20.311740 | orchestrator | Thursday 01 January 2026 00:54:38 +0000 (0:00:00.441) 0:00:07.735 ****** 2026-01-01 00:56:20.311749 | orchestrator | ok: [testbed-manager] 2026-01-01 00:56:20.311760 | orchestrator | 2026-01-01 00:56:20.311777 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:56:20.311793 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:56:20.311809 | orchestrator | 2026-01-01 00:56:20.311825 | orchestrator | 2026-01-01 00:56:20.311840 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:56:20.311868 | orchestrator | Thursday 01 January 2026 00:54:39 +0000 (0:00:00.335) 0:00:08.070 ****** 2026-01-01 00:56:20.311886 | orchestrator | =============================================================================== 2026-01-01 00:56:20.311904 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.65s 2026-01-01 00:56:20.311918 | orchestrator | Write kubeconfig file 
--------------------------------------------------- 1.64s 2026-01-01 00:56:20.311927 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.89s 2026-01-01 00:56:20.311958 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.86s 2026-01-01 00:56:20.311968 | orchestrator | Get home directory of operator user ------------------------------------- 0.74s 2026-01-01 00:56:20.311978 | orchestrator | Create .kube directory -------------------------------------------------- 0.72s 2026-01-01 00:56:20.311987 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.63s 2026-01-01 00:56:20.311997 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s 2026-01-01 00:56:20.312007 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.34s 2026-01-01 00:56:20.312016 | orchestrator | 2026-01-01 00:56:20.312026 | orchestrator | 2026-01-01 00:56:20.312036 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-01 00:56:20.312045 | orchestrator | 2026-01-01 00:56:20.312055 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-01 00:56:20.312074 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:00.207) 0:00:00.208 ****** 2026-01-01 00:56:20.312084 | orchestrator | ok: [localhost] => { 2026-01-01 00:56:20.312095 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2026-01-01 00:56:20.312107 | orchestrator | } 2026-01-01 00:56:20.312123 | orchestrator | 2026-01-01 00:56:20.312146 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-01 00:56:20.312165 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:00.072) 0:00:00.280 ****** 2026-01-01 00:56:20.312182 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-01 00:56:20.312199 | orchestrator | ...ignoring 2026-01-01 00:56:20.312213 | orchestrator | 2026-01-01 00:56:20.312229 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-01 00:56:20.312246 | orchestrator | Thursday 01 January 2026 00:53:02 +0000 (0:00:03.357) 0:00:03.638 ****** 2026-01-01 00:56:20.312261 | orchestrator | skipping: [localhost] 2026-01-01 00:56:20.312279 | orchestrator | 2026-01-01 00:56:20.312295 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-01 00:56:20.312312 | orchestrator | Thursday 01 January 2026 00:53:02 +0000 (0:00:00.122) 0:00:03.760 ****** 2026-01-01 00:56:20.312323 | orchestrator | ok: [localhost] 2026-01-01 00:56:20.312333 | orchestrator | 2026-01-01 00:56:20.312342 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:56:20.312352 | orchestrator | 2026-01-01 00:56:20.312362 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 00:56:20.312371 | orchestrator | Thursday 01 January 2026 00:53:02 +0000 (0:00:00.162) 0:00:03.923 ****** 2026-01-01 00:56:20.312381 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:56:20.312391 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:56:20.312400 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:56:20.312410 | orchestrator | 2026-01-01 
00:56:20.312420 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 00:56:20.312429 | orchestrator | Thursday 01 January 2026 00:53:03 +0000 (0:00:00.369) 0:00:04.293 ****** 2026-01-01 00:56:20.312439 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-01 00:56:20.312449 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-01 00:56:20.312459 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-01 00:56:20.312468 | orchestrator | 2026-01-01 00:56:20.312478 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-01 00:56:20.312487 | orchestrator | 2026-01-01 00:56:20.312497 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-01 00:56:20.312507 | orchestrator | Thursday 01 January 2026 00:53:03 +0000 (0:00:00.521) 0:00:04.814 ****** 2026-01-01 00:56:20.312517 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:56:20.312526 | orchestrator | 2026-01-01 00:56:20.312536 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-01 00:56:20.312546 | orchestrator | Thursday 01 January 2026 00:53:04 +0000 (0:00:00.634) 0:00:05.449 ****** 2026-01-01 00:56:20.312555 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:56:20.312565 | orchestrator | 2026-01-01 00:56:20.312574 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-01 00:56:20.312584 | orchestrator | Thursday 01 January 2026 00:53:05 +0000 (0:00:01.003) 0:00:06.452 ****** 2026-01-01 00:56:20.312594 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:56:20.312604 | orchestrator | 2026-01-01 00:56:20.312682 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2026-01-01 00:56:20.312695 | orchestrator | Thursday 01 January 2026 00:53:05 +0000 (0:00:00.278) 0:00:06.731 ******
2026-01-01 00:56:20.312705 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:56:20.312724 | orchestrator |
2026-01-01 00:56:20.312734 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-01-01 00:56:20.312744 | orchestrator | Thursday 01 January 2026 00:53:05 +0000 (0:00:00.273) 0:00:07.005 ******
2026-01-01 00:56:20.312754 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:56:20.312763 | orchestrator |
2026-01-01 00:56:20.312773 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-01-01 00:56:20.312783 | orchestrator | Thursday 01 January 2026 00:53:06 +0000 (0:00:00.928) 0:00:07.933 ******
2026-01-01 00:56:20.312793 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:56:20.312803 | orchestrator |
2026-01-01 00:56:20.312819 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-01 00:56:20.312829 | orchestrator | Thursday 01 January 2026 00:53:07 +0000 (0:00:00.962) 0:00:08.896 ******
2026-01-01 00:56:20.312839 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:56:20.312849 | orchestrator |
2026-01-01 00:56:20.312859 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-01 00:56:20.312879 | orchestrator | Thursday 01 January 2026 00:53:08 +0000 (0:00:00.647) 0:00:09.543 ******
2026-01-01 00:56:20.312890 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:56:20.312899 | orchestrator |
2026-01-01 00:56:20.312909 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-01-01 00:56:20.312919 | orchestrator | Thursday 01 January 2026 00:53:09 +0000 (0:00:00.879) 0:00:10.422 ******
2026-01-01
00:56:20.312929 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:56:20.312938 | orchestrator | 2026-01-01 00:56:20.312948 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-01 00:56:20.312958 | orchestrator | Thursday 01 January 2026 00:53:09 +0000 (0:00:00.533) 0:00:10.956 ****** 2026-01-01 00:56:20.312968 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:56:20.312975 | orchestrator | 2026-01-01 00:56:20.312983 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-01 00:56:20.312991 | orchestrator | Thursday 01 January 2026 00:53:10 +0000 (0:00:01.001) 0:00:11.957 ****** 2026-01-01 00:56:20.313005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 
00:56:20.313044 | orchestrator | 2026-01-01 00:56:20.313052 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-01 00:56:20.313060 | orchestrator | Thursday 01 January 2026 00:53:12 +0000 (0:00:01.287) 0:00:13.245 ****** 2026-01-01 00:56:20.313075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313108 | orchestrator | 2026-01-01 00:56:20.313116 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-01 00:56:20.313130 | orchestrator | Thursday 01 January 2026 00:53:15 +0000 (0:00:03.060) 0:00:16.306 ****** 2026-01-01 00:56:20.313143 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-01 00:56:20.313157 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-01 00:56:20.313170 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-01 00:56:20.313183 | orchestrator |
2026-01-01 00:56:20.313196 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-01-01 00:56:20.313245 | orchestrator | Thursday 01 January 2026 00:53:17 +0000 (0:00:02.027) 0:00:18.334 ******
2026-01-01 00:56:20.313265 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-01 00:56:20.313279 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-01 00:56:20.313288 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-01 00:56:20.313295 | orchestrator |
2026-01-01 00:56:20.313303 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-01-01 00:56:20.313318 | orchestrator | Thursday 01 January 2026 00:53:19 +0000 (0:00:02.051) 0:00:20.386 ******
2026-01-01 00:56:20.313326 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-01 00:56:20.313334 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-01 00:56:20.313342 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-01 00:56:20.313350 | orchestrator |
2026-01-01 00:56:20.313358 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-01-01 00:56:20.313366 | orchestrator | Thursday 01 January 2026 00:53:20 +0000 (0:00:01.326) 0:00:21.712 ******
2026-01-01 00:56:20.313374 | orchestrator | changed: [testbed-node-0] =>
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-01 00:56:20.313384 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-01 00:56:20.313396 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-01 00:56:20.313408 | orchestrator |
2026-01-01 00:56:20.313420 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-01-01 00:56:20.313433 | orchestrator | Thursday 01 January 2026 00:53:22 +0000 (0:00:01.782) 0:00:23.494 ******
2026-01-01 00:56:20.313446 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-01 00:56:20.313458 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-01 00:56:20.313472 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-01 00:56:20.313486 | orchestrator |
2026-01-01 00:56:20.313497 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-01-01 00:56:20.313512 | orchestrator | Thursday 01 January 2026 00:53:24 +0000 (0:00:02.008) 0:00:25.503 ******
2026-01-01 00:56:20.313521 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-01 00:56:20.313529 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-01 00:56:20.313536 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-01 00:56:20.313544 | orchestrator |
2026-01-01 00:56:20.313552 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-01 00:56:20.313560 | orchestrator | Thursday 01 January 2026 00:53:25 +0000 (0:00:01.613) 0:00:27.117 ******
2026-01-01
00:56:20.313568 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:56:20.313576 | orchestrator | 2026-01-01 00:56:20.313584 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-01-01 00:56:20.313592 | orchestrator | Thursday 01 January 2026 00:53:27 +0000 (0:00:01.302) 0:00:28.420 ****** 2026-01-01 00:56:20.313602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313668 | orchestrator | 2026-01-01 00:56:20.313676 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-01-01 00:56:20.313684 | orchestrator | Thursday 01 January 2026 00:53:28 +0000 (0:00:01.418) 0:00:29.838 ****** 2026-01-01 
00:56:20.313693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:56:20.313702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:56:20.313711 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:56:20.313719 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:56:20.313738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:56:20.313747 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:56:20.313755 | orchestrator | 2026-01-01 00:56:20.313763 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-01-01 00:56:20.313777 | orchestrator | Thursday 01 January 2026 00:53:29 +0000 (0:00:00.518) 0:00:30.356 ****** 2026-01-01 00:56:20.313785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:56:20.313794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:56:20.313803 | orchestrator | skipping: 
[testbed-node-0] 2026-01-01 00:56:20.313811 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:56:20.313828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:56:20.313837 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:56:20.313844 | orchestrator | 2026-01-01 00:56:20.313853 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-01-01 00:56:20.313865 | orchestrator | Thursday 01 January 2026 00:53:30 +0000 (0:00:00.905) 0:00:31.262 ****** 2026-01-01 00:56:20.313874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:56:20.313920 | orchestrator | 2026-01-01 00:56:20.313933 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-01-01 00:56:20.313945 | orchestrator | Thursday 01 January 2026 00:53:31 +0000 (0:00:01.377) 0:00:32.639 ****** 2026-01-01 00:56:20.313959 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 00:56:20.313973 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:56:20.313986 | orchestrator | } 2026-01-01 00:56:20.313998 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 00:56:20.314006 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:56:20.314014 | orchestrator | } 2026-01-01 00:56:20.314071 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 00:56:20.314079 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:56:20.314092 | orchestrator | } 2026-01-01 00:56:20.314100 | orchestrator | 2026-01-01 00:56:20.314109 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 00:56:20.314117 | orchestrator | Thursday 01 January 2026 00:53:31 +0000 (0:00:00.480) 0:00:33.120 ****** 2026-01-01 00:56:20.314141 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:56:20.314151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:56:20.314160 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:56:20.314168 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:56:20.314177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:56:20.314185 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:56:20.314193 | orchestrator | 2026-01-01 00:56:20.314201 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-01 00:56:20.314209 | orchestrator | Thursday 01 January 2026 00:53:33 +0000 (0:00:01.272) 0:00:34.393 ****** 2026-01-01 00:56:20.314218 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:56:20.314226 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:56:20.314234 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:56:20.314242 | 
orchestrator | 2026-01-01 00:56:20.314250 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-01 00:56:20.314258 | orchestrator | Thursday 01 January 2026 00:53:34 +0000 (0:00:00.956) 0:00:35.349 ****** 2026-01-01 00:56:20.314271 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:56:20.314279 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:56:20.314287 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:56:20.314295 | orchestrator | 2026-01-01 00:56:20.314303 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-01 00:56:20.314315 | orchestrator | Thursday 01 January 2026 00:53:43 +0000 (0:00:09.152) 0:00:44.502 ****** 2026-01-01 00:56:20.314323 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:56:20.314331 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:56:20.314339 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:56:20.314347 | orchestrator | 2026-01-01 00:56:20.314355 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-01 00:56:20.314363 | orchestrator | 2026-01-01 00:56:20.314371 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-01 00:56:20.314383 | orchestrator | Thursday 01 January 2026 00:53:43 +0000 (0:00:00.575) 0:00:45.078 ****** 2026-01-01 00:56:20.314391 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:56:20.314399 | orchestrator | 2026-01-01 00:56:20.314407 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-01 00:56:20.314415 | orchestrator | Thursday 01 January 2026 00:53:44 +0000 (0:00:00.741) 0:00:45.819 ****** 2026-01-01 00:56:20.314423 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:56:20.314431 | orchestrator | 2026-01-01 00:56:20.314439 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2026-01-01 00:56:20.314447 | orchestrator | Thursday 01 January 2026 00:53:44 +0000 (0:00:00.156) 0:00:45.976 ****** 2026-01-01 00:56:20.314455 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:56:20.314463 | orchestrator | 2026-01-01 00:56:20.314471 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-01 00:56:20.314479 | orchestrator | Thursday 01 January 2026 00:53:51 +0000 (0:00:07.147) 0:00:53.124 ****** 2026-01-01 00:56:20.314486 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:56:20.314495 | orchestrator | 2026-01-01 00:56:20.314503 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-01 00:56:20.314511 | orchestrator | 2026-01-01 00:56:20.314519 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-01 00:56:20.314527 | orchestrator | Thursday 01 January 2026 00:55:41 +0000 (0:01:49.382) 0:02:42.506 ****** 2026-01-01 00:56:20.314535 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:56:20.314543 | orchestrator | 2026-01-01 00:56:20.314550 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-01 00:56:20.314558 | orchestrator | Thursday 01 January 2026 00:55:42 +0000 (0:00:00.752) 0:02:43.258 ****** 2026-01-01 00:56:20.314566 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:56:20.314574 | orchestrator | 2026-01-01 00:56:20.314582 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-01 00:56:20.314590 | orchestrator | Thursday 01 January 2026 00:55:42 +0000 (0:00:00.116) 0:02:43.375 ****** 2026-01-01 00:56:20.314598 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:56:20.314606 | orchestrator | 2026-01-01 00:56:20.314635 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-01 
00:56:20.314644 | orchestrator | Thursday 01 January 2026 00:55:43 +0000 (0:00:01.695) 0:02:45.070 ****** 2026-01-01 00:56:20.314652 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:56:20.314660 | orchestrator | 2026-01-01 00:56:20.314668 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-01 00:56:20.314676 | orchestrator | 2026-01-01 00:56:20.314684 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-01 00:56:20.314692 | orchestrator | Thursday 01 January 2026 00:55:57 +0000 (0:00:13.490) 0:02:58.560 ****** 2026-01-01 00:56:20.314700 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:56:20.314707 | orchestrator | 2026-01-01 00:56:20.314715 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-01 00:56:20.314729 | orchestrator | Thursday 01 January 2026 00:55:58 +0000 (0:00:00.907) 0:02:59.468 ****** 2026-01-01 00:56:20.314736 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:56:20.314744 | orchestrator | 2026-01-01 00:56:20.314752 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-01 00:56:20.314760 | orchestrator | Thursday 01 January 2026 00:55:58 +0000 (0:00:00.318) 0:02:59.786 ****** 2026-01-01 00:56:20.314768 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:56:20.314776 | orchestrator | 2026-01-01 00:56:20.314784 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-01 00:56:20.314792 | orchestrator | Thursday 01 January 2026 00:56:00 +0000 (0:00:02.303) 0:03:02.090 ****** 2026-01-01 00:56:20.314838 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:56:20.314847 | orchestrator | 2026-01-01 00:56:20.314854 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-01 00:56:20.314862 | orchestrator | 2026-01-01 
00:56:20.314870 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-01 00:56:20.314878 | orchestrator | Thursday 01 January 2026 00:56:12 +0000 (0:00:11.837) 0:03:13.928 ****** 2026-01-01 00:56:20.314886 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:56:20.314894 | orchestrator | 2026-01-01 00:56:20.314902 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-01 00:56:20.314909 | orchestrator | Thursday 01 January 2026 00:56:13 +0000 (0:00:00.694) 0:03:14.623 ****** 2026-01-01 00:56:20.314917 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:56:20.314925 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:56:20.314933 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:56:20.314941 | orchestrator | 2026-01-01 00:56:20.314949 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:56:20.314957 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-01 00:56:20.314967 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-01 00:56:20.314975 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-01 00:56:20.314987 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-01 00:56:20.314996 | orchestrator | 2026-01-01 00:56:20.315003 | orchestrator | 2026-01-01 00:56:20.315011 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:56:20.315019 | orchestrator | Thursday 01 January 2026 00:56:16 +0000 (0:00:03.327) 0:03:17.950 ****** 2026-01-01 00:56:20.315027 | orchestrator | =============================================================================== 2026-01-01 
00:56:20.315035 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 134.71s 2026-01-01 00:56:20.315048 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.15s 2026-01-01 00:56:20.315057 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.15s 2026-01-01 00:56:20.315065 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.36s 2026-01-01 00:56:20.315073 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.33s 2026-01-01 00:56:20.315081 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.06s 2026-01-01 00:56:20.315088 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.40s 2026-01-01 00:56:20.315097 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.05s 2026-01-01 00:56:20.315104 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.03s 2026-01-01 00:56:20.315112 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.01s 2026-01-01 00:56:20.315126 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.78s 2026-01-01 00:56:20.315134 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.62s 2026-01-01 00:56:20.315142 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.42s 2026-01-01 00:56:20.315150 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.38s 2026-01-01 00:56:20.315158 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.33s 2026-01-01 00:56:20.315166 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.30s 2026-01-01 00:56:20.315174 
| orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.29s 2026-01-01 00:56:20.315182 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.27s 2026-01-01 00:56:20.315190 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.00s 2026-01-01 00:56:20.315198 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.00s 2026-01-01 00:56:20.315206 | orchestrator | 2026-01-01 00:56:20 | INFO  | Task 810dd3de-ef67-4d31-93c2-0d7d3a83ca25 is in state SUCCESS 2026-01-01 00:56:20.315214 | orchestrator | 2026-01-01 00:56:20 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:56:20.315222 | orchestrator | 2026-01-01 00:56:20 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:56:20.315230 | orchestrator | 2026-01-01 00:56:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:23.339123 | orchestrator | 2026-01-01 00:56:23 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state STARTED 2026-01-01 00:56:23.339961 | orchestrator | 2026-01-01 00:56:23 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED 2026-01-01 00:56:23.341344 | orchestrator | 2026-01-01 00:56:23 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED 2026-01-01 00:56:23.341547 | orchestrator | 2026-01-01 00:56:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:57:51.689248 | orchestrator | 2026-01-01 00:57:51.689352 | orchestrator | 2026-01-01 00:57:51 | INFO  | Task 927e7803-744d-4726-a813-9ae87df1cba8 is in state SUCCESS 2026-01-01 00:57:51.692565 | orchestrator | 2026-01-01 00:57:51.692635 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:57:51.692644 | orchestrator | 2026-01-01 00:57:51.692650 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 00:57:51.692657 | orchestrator | Thursday 01 January 2026 00:53:52 +0000 (0:00:00.228) 0:00:00.228 ****** 2026-01-01 00:57:51.692717 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:51.692725 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:51.692731 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:51.692737 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:51.692744 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:51.692749 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:51.692755 | orchestrator | 2026-01-01 00:57:51.692762 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 00:57:51.692768 | orchestrator | Thursday 01 January 2026 00:53:53 +0000 (0:00:00.910) 0:00:01.138 ****** 2026-01-01 00:57:51.692774 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-01 00:57:51.692787 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-01 00:57:51.692793 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-01 00:57:51.692799 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-01 00:57:51.692805 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-01 00:57:51.692811 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-01
00:57:51.692817 | orchestrator | 2026-01-01 00:57:51.692823 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-01 00:57:51.692829 | orchestrator | 2026-01-01 00:57:51.692835 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-01 00:57:51.692841 | orchestrator | Thursday 01 January 2026 00:53:54 +0000 (0:00:01.310) 0:00:02.449 ****** 2026-01-01 00:57:51.692848 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:51.692868 | orchestrator | 2026-01-01 00:57:51.692879 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-01 00:57:51.692889 | orchestrator | Thursday 01 January 2026 00:53:57 +0000 (0:00:02.560) 0:00:05.010 ****** 2026-01-01 00:57:51.692901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.692916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.692926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.692936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.692946 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.692957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.692966 | orchestrator | 2026-01-01 00:57:51.692987 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-01 00:57:51.693051 | orchestrator | Thursday 01 January 2026 00:54:00 +0000 (0:00:02.568) 0:00:07.578 ****** 
2026-01-01 00:57:51.693082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693136 | orchestrator |
2026-01-01 00:57:51.693141 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-01-01 00:57:51.693147 | orchestrator | Thursday 01 January 2026 00:54:01 +0000 (0:00:01.813) 0:00:09.391 ******
2026-01-01 00:57:51.693153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693213 | orchestrator |
2026-01-01 00:57:51.693220 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-01-01 00:57:51.693227 | orchestrator | Thursday 01 January 2026 00:54:03 +0000 (0:00:01.680) 0:00:11.072 ******
2026-01-01 00:57:51.693234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693270 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693277 | orchestrator |
2026-01-01 00:57:51.693287 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-01-01 00:57:51.693295 | orchestrator | Thursday 01 January 2026 00:54:05 +0000 (0:00:01.849) 0:00:12.922 ******
2026-01-01 00:57:51.693306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693366 | orchestrator |
2026-01-01 00:57:51.693374 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-01-01 00:57:51.693381 | orchestrator | Thursday 01 January 2026 00:54:07 +0000 (0:00:01.848) 0:00:14.770 ******
2026-01-01 00:57:51.693388 | orchestrator | changed: [testbed-node-0] => {
2026-01-01 00:57:51.693395 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:57:51.693402 | orchestrator | }
2026-01-01 00:57:51.693412 | orchestrator | changed: [testbed-node-1] => {
2026-01-01 00:57:51.693423 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:57:51.693433 | orchestrator | }
2026-01-01 00:57:51.693442 | orchestrator | changed: [testbed-node-2] => {
2026-01-01 00:57:51.693449 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:57:51.693456 | orchestrator | }
2026-01-01 00:57:51.693462 | orchestrator | changed: [testbed-node-3] => {
2026-01-01 00:57:51.693469 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:57:51.693476 | orchestrator | }
2026-01-01 00:57:51.693483 | orchestrator | changed: [testbed-node-4] => {
2026-01-01 00:57:51.693493 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:57:51.693504 | orchestrator | }
2026-01-01 00:57:51.693512 | orchestrator | changed: [testbed-node-5] => {
2026-01-01 00:57:51.693520 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:57:51.693530 | orchestrator | }
2026-01-01 00:57:51.693545 | orchestrator |
2026-01-01 00:57:51.693552 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-01 00:57:51.693558 | orchestrator | Thursday 01 January 2026 00:54:08 +0000 (0:00:00.912) 0:00:15.683 ******
2026-01-01 00:57:51.693564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693588 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.693600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693606 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.693616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693622 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.693628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693636 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:51.693646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693656 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:51.693664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.693670 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:51.693676 | orchestrator |
2026-01-01 00:57:51.693682 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-01-01 00:57:51.693688 | orchestrator | Thursday 01 January 2026 00:54:09 +0000 (0:00:01.465) 0:00:17.148 ******
2026-01-01 00:57:51.693694 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:51.693699 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:51.693705 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:51.693711 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:51.693717 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:51.693723 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:51.693728 | orchestrator |
2026-01-01 00:57:51.693739 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-01-01 00:57:51.693745 | orchestrator | Thursday 01 January 2026 00:54:13 +0000 (0:00:03.518) 0:00:20.667 ******
2026-01-01 00:57:51.693751 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-01-01 00:57:51.693757 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-01-01 00:57:51.693763 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-01-01 00:57:51.693768 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-01-01 00:57:51.693774 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-01 00:57:51.693780 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-01-01 00:57:51.693786 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-01-01 00:57:51.693791 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-01 00:57:51.693797 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-01 00:57:51.693803 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-01 00:57:51.693809 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-01 00:57:51.693815 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-01 00:57:51.693826 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-01 00:57:51.693832 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-01 00:57:51.693851 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-01 00:57:51.693857 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-01 00:57:51.693863 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-01 00:57:51.693869 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-01 00:57:51.693876 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-01 00:57:51.693882 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-01 00:57:51.693888 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-01 00:57:51.693894 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-01 00:57:51.693900 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-01 00:57:51.693906 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-01 00:57:51.693912 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-01 00:57:51.693917 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-01 00:57:51.693923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-01 00:57:51.693929 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-01 00:57:51.693939 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-01 00:57:51.693945 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-01 00:57:51.693950 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-01 00:57:51.693956 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-01 00:57:51.693962 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-01 00:57:51.693968 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-01 00:57:51.693974 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-01 00:57:51.693980 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-01 00:57:51.693986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-01 00:57:51.693992 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-01 00:57:51.693998 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-01 00:57:51.694004 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-01 00:57:51.694009 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-01-01 00:57:51.694164 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-01 00:57:51.694171 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-01 00:57:51.694177 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-01-01 00:57:51.694183 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-01-01 00:57:51.694189 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-01-01 00:57:51.694195 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-01 00:57:51.694206 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-01-01 00:57:51.694212 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-01 00:57:51.694218 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-01-01 00:57:51.694224 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-01 00:57:51.694230 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-01 00:57:51.694259 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-01 00:57:51.694266 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-01 00:57:51.694272 | orchestrator |
2026-01-01 00:57:51.694278 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-01 00:57:51.694284 | orchestrator | Thursday 01 January 2026 00:54:36 +0000 (0:00:23.584) 0:00:44.252 ******
2026-01-01 00:57:51.694298 | orchestrator |
2026-01-01 00:57:51.694304 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-01 00:57:51.694310 | orchestrator | Thursday 01 January 2026 00:54:36 +0000 (0:00:00.075) 0:00:44.328 ******
2026-01-01 00:57:51.694316 | orchestrator |
2026-01-01 00:57:51.694322 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-01 00:57:51.694327 | orchestrator | Thursday 01 January 2026 00:54:37 +0000 (0:00:00.215) 0:00:44.543 ******
2026-01-01 00:57:51.694333 | orchestrator |
2026-01-01 00:57:51.694339 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-01 00:57:51.694345 | orchestrator | Thursday 01 January 2026 00:54:37 +0000 (0:00:00.131) 0:00:44.675 ******
2026-01-01 00:57:51.694351 | orchestrator |
2026-01-01 00:57:51.694357 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-01 00:57:51.694362 | orchestrator | Thursday 01 January 2026 00:54:37 +0000 (0:00:00.129) 0:00:44.804 ******
2026-01-01 00:57:51.694368 | orchestrator |
2026-01-01 00:57:51.694374 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-01 00:57:51.694382 | orchestrator | Thursday 01 January 2026 00:54:37 +0000 (0:00:00.068) 0:00:44.873 ******
2026-01-01 00:57:51.694392 | orchestrator |
2026-01-01 00:57:51.694402 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-01-01 00:57:51.694412 | orchestrator | Thursday 01 January 2026 00:54:37 +0000 (0:00:00.071) 0:00:44.944 ******
2026-01-01 00:57:51.694418 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:51.694424 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.694430 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:51.694436 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:51.694442 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.694448 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.694453 | orchestrator |
2026-01-01 00:57:51.694459 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-01-01 00:57:51.694465 | orchestrator | Thursday 01 January 2026 00:54:39 +0000 (0:00:02.302) 0:00:47.248 ******
2026-01-01 00:57:51.694471 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:51.694477 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:51.694482 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:51.694488 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:51.694494 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:51.694500 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:51.694506 | orchestrator |
2026-01-01 00:57:51.694512 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-01-01 00:57:51.694517 | orchestrator |
2026-01-01 00:57:51.694523 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-01 00:57:51.694529 | orchestrator | Thursday 01 January 2026 00:54:48 +0000 (0:00:08.464) 0:00:55.712 ******
2026-01-01 00:57:51.694535 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:57:51.694541 | orchestrator |
2026-01-01 00:57:51.694547 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-01 00:57:51.694553 | orchestrator | Thursday 01 January 2026 00:54:48 +0000 (0:00:00.574) 0:00:56.286 ******
2026-01-01 00:57:51.694559 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:57:51.694565 | orchestrator |
2026-01-01 00:57:51.694619 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-01-01 00:57:51.694627 | orchestrator | Thursday 01 January 2026 00:54:49 +0000 (0:00:00.843) 0:00:57.130 ******
2026-01-01 00:57:51.694633 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.694639 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.694645 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.694651 | orchestrator |
2026-01-01 00:57:51.694657 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-01-01 00:57:51.694662 | orchestrator | Thursday 01 January 2026 00:54:50 +0000 (0:00:01.026) 0:00:58.156 ******
2026-01-01 00:57:51.694673 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.694679 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.694685 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.694691 | orchestrator |
2026-01-01 00:57:51.694696 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-01-01 00:57:51.694702 | orchestrator | Thursday 01 January 2026 00:54:51 +0000 (0:00:00.425) 0:00:58.582 ******
2026-01-01 00:57:51.694708 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.694714 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.694720 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.694726 | orchestrator |
2026-01-01 00:57:51.694732 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-01-01 00:57:51.694742 | orchestrator | Thursday 01 January 2026 00:54:51 +0000 (0:00:00.673) 0:00:59.256 ******
2026-01-01 00:57:51.694748 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.694753 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.694759 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.694765 | orchestrator |
2026-01-01 00:57:51.694771 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-01-01 00:57:51.694777 | orchestrator | Thursday 01 January 2026 00:54:52 +0000 (0:00:00.424) 0:00:59.681 ******
2026-01-01 00:57:51.694783 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.694788 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.694794 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.694800 | orchestrator |
2026-01-01 00:57:51.694806 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-01-01 00:57:51.694812 | orchestrator | Thursday 01 January 2026 00:54:52 +0000 (0:00:00.471) 0:01:00.153 ******
2026-01-01 00:57:51.694817 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.694823 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.694829 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.694835 | orchestrator |
2026-01-01 00:57:51.694845 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-01-01 00:57:51.694851 | orchestrator | Thursday 01 January 2026 00:54:53 +0000 (0:00:00.490) 0:01:00.643 ******
2026-01-01 00:57:51.694856 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.694862 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.694868 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.694874 | orchestrator |
2026-01-01 00:57:51.694880 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-01-01 00:57:51.694886 | orchestrator | Thursday 01 January 2026 00:54:53 +0000 (0:00:00.591) 0:01:01.234 ******
2026-01-01 00:57:51.694892 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.694897 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.694903 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.694909 | orchestrator |
2026-01-01 00:57:51.694915 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-01-01 00:57:51.694921 | orchestrator | Thursday 01 January 2026 00:54:54 +0000 (0:00:00.403) 0:01:01.638 ******
2026-01-01 00:57:51.694927 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.694932 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.694938 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.694944 | orchestrator |
2026-01-01 00:57:51.694950 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-01-01 00:57:51.694956 | orchestrator | Thursday 01 January 2026 00:54:54 +0000 (0:00:00.380) 0:01:02.018 ******
2026-01-01 00:57:51.694962 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.694968 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.694973 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.694979 | orchestrator |
2026-01-01 00:57:51.694985 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-01-01 00:57:51.694991 | orchestrator | Thursday 01 January 2026 00:54:54 +0000 (0:00:00.360) 0:01:02.379 ******
2026-01-01 00:57:51.694997 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.695007 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.695013 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.695019 | orchestrator |
2026-01-01 00:57:51.695025 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-01-01 00:57:51.695031 | orchestrator | Thursday 01 January 2026 00:54:55 +0000 (0:00:00.901) 0:01:03.280 ******
2026-01-01 00:57:51.695037 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.695042 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.695048 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.695054 | orchestrator |
2026-01-01 00:57:51.695060 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-01-01 00:57:51.695066 | orchestrator | Thursday 01 January 2026 00:54:56 +0000 (0:00:00.373) 0:01:03.654 ******
2026-01-01 00:57:51.695071 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.695077 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.695083 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.695089 | orchestrator |
2026-01-01 00:57:51.695095 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-01-01 00:57:51.695101 | orchestrator | Thursday 01 January 2026 00:54:56 +0000 (0:00:00.469) 0:01:04.123 ******
2026-01-01 00:57:51.695107 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.695112 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.695118 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.695124 | orchestrator |
2026-01-01 00:57:51.695130 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-01-01 00:57:51.695145 | orchestrator | Thursday 01 January 2026 00:54:56 +0000 (0:00:00.393) 0:01:04.517 ******
2026-01-01 00:57:51.695151 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.695164 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.695170 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.695176 | orchestrator |
2026-01-01 00:57:51.695181 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-01-01 00:57:51.695187 | orchestrator | Thursday 01 January 2026 00:54:57 +0000 (0:00:00.351) 0:01:04.869 ******
2026-01-01 00:57:51.695193 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.695199 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.695205 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.695210 | orchestrator |
2026-01-01 00:57:51.695216 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-01-01 00:57:51.695222 | orchestrator | Thursday 01 January 2026 00:54:57 +0000 (0:00:00.595) 0:01:05.465 ******
2026-01-01 00:57:51.695228 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.695234 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.695240 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.695246 | orchestrator |
2026-01-01 00:57:51.695251 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-01 00:57:51.695257 | orchestrator | Thursday 01 January 2026 00:54:58 +0000 (0:00:00.304) 0:01:05.770 ******
2026-01-01 00:57:51.695263 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:57:51.695269 | orchestrator |
2026-01-01 00:57:51.695279 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-01-01 00:57:51.695285 | orchestrator | Thursday 01 January 2026 00:54:59 +0000 (0:00:00.770) 0:01:06.540 ******
2026-01-01 00:57:51.695291 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.695297 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.695303 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.695309 | orchestrator |
2026-01-01 00:57:51.695315 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-01-01 00:57:51.695321 | orchestrator | Thursday 01 January 2026 00:54:59 +0000 (0:00:00.720) 0:01:07.260 ******
2026-01-01 00:57:51.695326 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.695332 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.695338 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.695351 | orchestrator |
2026-01-01 00:57:51.695357 |
orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-01 00:57:51.695363 | orchestrator | Thursday 01 January 2026 00:55:00 +0000 (0:00:00.460) 0:01:07.721 ****** 2026-01-01 00:57:51.695369 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:51.695375 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:51.695383 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:51.695389 | orchestrator | 2026-01-01 00:57:51.695395 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-01 00:57:51.695401 | orchestrator | Thursday 01 January 2026 00:55:00 +0000 (0:00:00.358) 0:01:08.079 ****** 2026-01-01 00:57:51.695406 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:51.695448 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:51.695455 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:51.695461 | orchestrator | 2026-01-01 00:57:51.695505 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-01 00:57:51.695512 | orchestrator | Thursday 01 January 2026 00:55:00 +0000 (0:00:00.342) 0:01:08.421 ****** 2026-01-01 00:57:51.695518 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:51.695524 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:51.695529 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:51.695535 | orchestrator | 2026-01-01 00:57:51.695541 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-01 00:57:51.695547 | orchestrator | Thursday 01 January 2026 00:55:01 +0000 (0:00:00.623) 0:01:09.045 ****** 2026-01-01 00:57:51.695553 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:51.695563 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:51.695588 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:51.695600 | orchestrator | 2026-01-01 
00:57:51.695609 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-01 00:57:51.695619 | orchestrator | Thursday 01 January 2026 00:55:01 +0000 (0:00:00.400) 0:01:09.445 ****** 2026-01-01 00:57:51.695628 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:51.695637 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:51.695646 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:51.695656 | orchestrator | 2026-01-01 00:57:51.695662 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-01 00:57:51.695668 | orchestrator | Thursday 01 January 2026 00:55:02 +0000 (0:00:00.331) 0:01:09.776 ****** 2026-01-01 00:57:51.695674 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:51.695680 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:51.695685 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:51.695691 | orchestrator | 2026-01-01 00:57:51.695697 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-01 00:57:51.695703 | orchestrator | Thursday 01 January 2026 00:55:02 +0000 (0:00:00.343) 0:01:10.120 ****** 2026-01-01 00:57:51.695711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.695781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.695797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.695815 | orchestrator | 2026-01-01 00:57:51.695821 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-01 00:57:51.695828 | orchestrator | 
Thursday 01 January 2026 00:55:05 +0000 (0:00:03.295) 0:01:13.416 ****** 2026-01-01 00:57:51.695837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:57:51.695891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.695902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.695915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.695922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.695928 | orchestrator | 2026-01-01 00:57:51.695934 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-01 00:57:51.695941 | orchestrator | Thursday 01 January 2026 00:55:11 +0000 (0:00:05.839) 0:01:19.255 ****** 2026-01-01 00:57:51.695947 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-01 00:57:51.695958 | orchestrator | 2026-01-01 00:57:51.695964 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-01 00:57:51.695970 | orchestrator | Thursday 01 January 2026 00:55:12 +0000 (0:00:00.640) 0:01:19.896 ****** 2026-01-01 00:57:51.695977 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:51.695983 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:51.695989 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:51.695995 | orchestrator | 2026-01-01 00:57:51.696002 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-01 00:57:51.696008 | orchestrator | Thursday 01 January 2026 00:55:13 +0000 (0:00:00.909) 0:01:20.805 ****** 2026-01-01 00:57:51.696014 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:51.696020 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:51.696026 | orchestrator 
| changed: [testbed-node-2] 2026-01-01 00:57:51.696032 | orchestrator | 2026-01-01 00:57:51.696039 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-01 00:57:51.696045 | orchestrator | Thursday 01 January 2026 00:55:14 +0000 (0:00:01.638) 0:01:22.444 ****** 2026-01-01 00:57:51.696051 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:51.696057 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:51.696063 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:51.696069 | orchestrator | 2026-01-01 00:57:51.696076 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-01 00:57:51.696082 | orchestrator | Thursday 01 January 2026 00:55:16 +0000 (0:00:01.782) 0:01:24.226 ****** 2026-01-01 00:57:51.696093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.696100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 
00:57:51.696110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.696117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.696124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.696134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.696140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.696147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.696164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': 
{'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.696183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696189 | orchestrator | 2026-01-01 00:57:51.696199 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-01 00:57:51.696206 | orchestrator | Thursday 01 January 2026 00:55:21 +0000 (0:00:04.437) 0:01:28.664 ****** 2026-01-01 00:57:51.696212 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 00:57:51.696218 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:57:51.696225 | orchestrator | } 2026-01-01 00:57:51.696231 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 00:57:51.696237 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:57:51.696243 | orchestrator | } 
2026-01-01 00:57:51.696250 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 00:57:51.696256 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:57:51.696262 | orchestrator | } 2026-01-01 00:57:51.696268 | orchestrator | 2026-01-01 00:57:51.696274 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 00:57:51.696281 | orchestrator | Thursday 01 January 2026 00:55:21 +0000 (0:00:00.774) 0:01:29.439 ****** 2026-01-01 00:57:51.696287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.696363 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.696370 | orchestrator | 2026-01-01 00:57:51.696376 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-01 00:57:51.696382 | orchestrator | Thursday 01 January 2026 00:55:24 +0000 (0:00:02.901) 0:01:32.340 ****** 2026-01-01 00:57:51.696389 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-01 00:57:51.696396 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-01 00:57:51.696402 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-01 00:57:51.696408 | orchestrator | 2026-01-01 00:57:51.696415 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-01 00:57:51.696421 | orchestrator | Thursday 01 January 2026 00:55:25 +0000 (0:00:01.072) 0:01:33.413 ****** 2026-01-01 00:57:51.696440 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 00:57:51.696446 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:57:51.696453 | orchestrator | } 2026-01-01 00:57:51.696459 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 00:57:51.696465 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:57:51.696472 | orchestrator | } 2026-01-01 00:57:51.696478 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 00:57:51.696484 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:57:51.696494 | orchestrator | } 2026-01-01 00:57:51.696501 | orchestrator | 2026-01-01 00:57:51.696507 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-01 00:57:51.696513 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.812) 0:01:34.226 ****** 2026-01-01 00:57:51.696520 | orchestrator | 2026-01-01 00:57:51.696530 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2026-01-01 00:57:51.696536 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.066) 0:01:34.292 ****** 2026-01-01 00:57:51.696542 | orchestrator | 2026-01-01 00:57:51.696549 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-01 00:57:51.696555 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.066) 0:01:34.359 ****** 2026-01-01 00:57:51.696561 | orchestrator | 2026-01-01 00:57:51.696567 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-01 00:57:51.696596 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.067) 0:01:34.427 ****** 2026-01-01 00:57:51.696603 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:51.696613 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:51.696619 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:51.696625 | orchestrator | 2026-01-01 00:57:51.696632 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-01 00:57:51.696638 | orchestrator | Thursday 01 January 2026 00:55:43 +0000 (0:00:16.908) 0:01:51.335 ****** 2026-01-01 00:57:51.696644 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:51.696651 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:51.696657 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:51.696663 | orchestrator | 2026-01-01 00:57:51.696669 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-01 00:57:51.696675 | orchestrator | Thursday 01 January 2026 00:55:55 +0000 (0:00:11.189) 0:02:02.525 ****** 2026-01-01 00:57:51.696682 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-01 00:57:51.696688 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-01 00:57:51.696694 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-01 
00:57:51.696701 | orchestrator | 2026-01-01 00:57:51.696707 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-01 00:57:51.696713 | orchestrator | Thursday 01 January 2026 00:56:12 +0000 (0:00:17.530) 0:02:20.056 ****** 2026-01-01 00:57:51.696719 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:51.696725 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:51.696732 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:51.696738 | orchestrator | 2026-01-01 00:57:51.696744 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-01 00:57:51.696750 | orchestrator | Thursday 01 January 2026 00:56:27 +0000 (0:00:15.262) 0:02:35.318 ****** 2026-01-01 00:57:51.696756 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:51.696763 | orchestrator | 2026-01-01 00:57:51.696769 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-01 00:57:51.696775 | orchestrator | Thursday 01 January 2026 00:56:27 +0000 (0:00:00.132) 0:02:35.451 ****** 2026-01-01 00:57:51.696781 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:51.696788 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:51.696794 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:51.696801 | orchestrator | 2026-01-01 00:57:51.696807 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-01 00:57:51.696813 | orchestrator | Thursday 01 January 2026 00:56:28 +0000 (0:00:00.835) 0:02:36.286 ****** 2026-01-01 00:57:51.696819 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:51.696826 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:51.696832 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:51.696838 | orchestrator | 2026-01-01 00:57:51.696844 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 
2026-01-01 00:57:51.696851 | orchestrator | Thursday 01 January 2026 00:56:29 +0000 (0:00:00.665) 0:02:36.952 ****** 2026-01-01 00:57:51.696857 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:51.696863 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:51.696869 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:51.696875 | orchestrator | 2026-01-01 00:57:51.696882 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-01 00:57:51.696888 | orchestrator | Thursday 01 January 2026 00:56:30 +0000 (0:00:01.111) 0:02:38.063 ****** 2026-01-01 00:57:51.696898 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:51.696904 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:51.696911 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:51.696917 | orchestrator | 2026-01-01 00:57:51.696923 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-01 00:57:51.696929 | orchestrator | Thursday 01 January 2026 00:56:31 +0000 (0:00:00.656) 0:02:38.720 ****** 2026-01-01 00:57:51.696935 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:51.696942 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:51.696948 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:51.696954 | orchestrator | 2026-01-01 00:57:51.696961 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-01 00:57:51.696967 | orchestrator | Thursday 01 January 2026 00:56:32 +0000 (0:00:00.850) 0:02:39.571 ****** 2026-01-01 00:57:51.696973 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:51.696979 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:51.696986 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:51.696992 | orchestrator | 2026-01-01 00:57:51.696998 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-01 00:57:51.697005 | orchestrator | Thursday 01 
January 2026 00:56:32 +0000 (0:00:00.804) 0:02:40.375 ****** 2026-01-01 00:57:51.697011 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-01-01 00:57:51.697017 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-01 00:57:51.697023 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-01 00:57:51.697030 | orchestrator | 2026-01-01 00:57:51.697036 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-01 00:57:51.697042 | orchestrator | Thursday 01 January 2026 00:56:34 +0000 (0:00:01.162) 0:02:41.537 ****** 2026-01-01 00:57:51.697048 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:51.697055 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:51.697061 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:51.697067 | orchestrator | 2026-01-01 00:57:51.697074 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-01 00:57:51.697083 | orchestrator | Thursday 01 January 2026 00:56:34 +0000 (0:00:00.328) 0:02:41.865 ****** 2026-01-01 00:57:51.697090 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697107 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697114 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697125 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697132 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697138 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.697157 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.697173 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.697190 | orchestrator | 2026-01-01 00:57:51.697196 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-01 00:57:51.697203 | orchestrator | Thursday 01 January 2026 00:56:37 +0000 (0:00:02.851) 0:02:44.717 ****** 2026-01-01 00:57:51.697209 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697216 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697222 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697233 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697250 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.697268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.697287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.697304 | orchestrator | 2026-01-01 00:57:51.697311 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-01 00:57:51.697317 | orchestrator | Thursday 01 January 2026 00:56:43 +0000 (0:00:06.715) 0:02:51.433 ****** 2026-01-01 00:57:51.697323 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-01 00:57:51.697330 | orchestrator | 2026-01-01 00:57:51.697337 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-01 00:57:51.697343 | orchestrator | Thursday 01 January 2026 00:56:44 +0000 (0:00:00.933) 0:02:52.367 ****** 2026-01-01 00:57:51.697349 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:51.697355 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:51.697362 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:51.697368 | orchestrator | 2026-01-01 00:57:51.697375 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-01 00:57:51.697384 | orchestrator | Thursday 01 January 2026 00:56:45 +0000 (0:00:00.880) 0:02:53.247 ****** 2026-01-01 00:57:51.697394 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:51.697400 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:51.697407 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:51.697413 | orchestrator | 2026-01-01 00:57:51.697419 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-01 
00:57:51.697426 | orchestrator | Thursday 01 January 2026 00:56:47 +0000 (0:00:01.842) 0:02:55.090 ****** 2026-01-01 00:57:51.697432 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:51.697438 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:51.697444 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:51.697451 | orchestrator | 2026-01-01 00:57:51.697457 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-01 00:57:51.697463 | orchestrator | Thursday 01 January 2026 00:56:49 +0000 (0:00:01.972) 0:02:57.062 ****** 2026-01-01 00:57:51.697470 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697477 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697483 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697490 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:57:51.697525 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.697538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.697552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:57:51.697558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:57:51.697565 | orchestrator | 2026-01-01 00:57:51.697588 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-01 00:57:51.697595 | orchestrator | Thursday 01 January 2026 00:56:54 +0000 (0:00:04.969) 0:03:02.032 ****** 2026-01-01 00:57:51.697601 | orchestrator | ok: [testbed-node-0] => { 2026-01-01 00:57:51.697608 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:57:51.697614 | orchestrator | } 2026-01-01 00:57:51.697620 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 00:57:51.697627 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:57:51.697633 | orchestrator | } 2026-01-01 00:57:51.697639 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 00:57:51.697645 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:57:51.697651 | orchestrator | } 
2026-01-01 00:57:51.697658 | orchestrator |
2026-01-01 00:57:51.697664 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-01 00:57:51.697675 | orchestrator | Thursday 01 January 2026 00:56:54 +0000 (0:00:00.407) 0:03:02.440 ******
2026-01-01 00:57:51.697686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.697696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.697703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.697710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.697716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.697723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.697729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.697740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.697751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.697760 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:57:51.697767 | orchestrator |
2026-01-01 00:57:51.697773 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-01-01 00:57:51.697780 | orchestrator | Thursday 01 January 2026 00:56:57 +0000 (0:00:02.173) 0:03:04.614 ******
2026-01-01 00:57:51.697786 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-01-01 00:57:51.697793 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-01-01 00:57:51.697799 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-01-01 00:57:51.697806 | orchestrator |
2026-01-01 00:57:51.697812 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-01-01 00:57:51.697818 | orchestrator | Thursday 01 January 2026 00:56:58 +0000 (0:00:01.326) 0:03:05.940 ******
2026-01-01 00:57:51.697824 | orchestrator | changed: [testbed-node-0] => {
2026-01-01 00:57:51.697831 | orchestrator |     "msg": "Notifying handlers"
2026-01-01 00:57:51.697837 | orchestrator | }
2026-01-01 00:57:51.697843 | orchestrator | changed: [testbed-node-1] => {
2026-01-01 00:57:51.697849 | orchestrator |     "msg": "Notifying handlers"
2026-01-01 00:57:51.697856 | orchestrator | }
2026-01-01 00:57:51.697862 | orchestrator | changed: [testbed-node-2] => {
2026-01-01 00:57:51.697868 | orchestrator |     "msg": "Notifying handlers"
2026-01-01 00:57:51.697874 | orchestrator | }
2026-01-01 00:57:51.697880 | orchestrator |
2026-01-01 00:57:51.697886 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-01 00:57:51.697893 | orchestrator | Thursday 01 January 2026 00:56:59 +0000 (0:00:00.594) 0:03:06.535 ******
2026-01-01 00:57:51.697899 | orchestrator |
2026-01-01 00:57:51.697905 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-01 00:57:51.697911 | orchestrator | Thursday 01 January 2026 00:56:59 +0000 (0:00:00.068) 0:03:06.604 ******
2026-01-01 00:57:51.697918 | orchestrator |
2026-01-01 00:57:51.697924 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-01 00:57:51.697930 | orchestrator | Thursday 01 January 2026 00:56:59 +0000 (0:00:00.064) 0:03:06.668 ******
2026-01-01 00:57:51.697936 | orchestrator |
2026-01-01 00:57:51.697942 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-01 00:57:51.697949 | orchestrator | Thursday 01 January 2026 00:56:59 +0000 (0:00:00.068) 0:03:06.737 ******
2026-01-01 00:57:51.697955 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:51.697961 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:51.697972 | orchestrator |
2026-01-01 00:57:51.697979 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-01 00:57:51.697985 | orchestrator | Thursday 01 January 2026 00:57:13 +0000 (0:00:14.497) 0:03:21.235 ******
2026-01-01 00:57:51.697991 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:51.697997 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:51.698003 | orchestrator |
2026-01-01 00:57:51.698010 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-01-01 00:57:51.698041 | orchestrator | Thursday 01 January 2026 00:57:27 +0000 (0:00:13.599) 0:03:34.834 ******
2026-01-01 00:57:51.698048 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-01-01 00:57:51.698054 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-01-01 00:57:51.698060 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-01-01 00:57:51.698067 | orchestrator |
2026-01-01 00:57:51.698073 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-01 00:57:51.698079 | orchestrator | Thursday 01 January 2026 00:57:41 +0000 (0:00:14.273) 0:03:49.108 ******
2026-01-01 00:57:51.698085 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:51.698091 | orchestrator |
2026-01-01 00:57:51.698098 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-01 00:57:51.698104 | orchestrator | Thursday 01 January 2026 00:57:41 +0000 (0:00:00.147) 0:03:49.255 ******
2026-01-01 00:57:51.698110 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.698117 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.698123 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.698129 | orchestrator |
2026-01-01 00:57:51.698135 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-01 00:57:51.698142 | orchestrator | Thursday 01 January 2026 00:57:42 +0000 (0:00:01.004) 0:03:50.260 ******
2026-01-01 00:57:51.698148 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.698154 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.698160 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:51.698166 | orchestrator |
2026-01-01 00:57:51.698173 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-01 00:57:51.698179 | orchestrator | Thursday 01 January 2026 00:57:43 +0000 (0:00:00.657) 0:03:50.917 ******
2026-01-01 00:57:51.698185 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.698191 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.698197 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.698204 | orchestrator |
2026-01-01 00:57:51.698210 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-01 00:57:51.698221 | orchestrator | Thursday 01 January 2026 00:57:44 +0000 (0:00:01.134) 0:03:52.052 ******
2026-01-01 00:57:51.698228 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:51.698234 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:51.698240 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:51.698246 | orchestrator |
2026-01-01 00:57:51.698253 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-01 00:57:51.698259 | orchestrator | Thursday 01 January 2026 00:57:45 +0000 (0:00:00.638) 0:03:52.691 ******
2026-01-01 00:57:51.698265 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.698272 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.698278 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.698284 | orchestrator |
2026-01-01 00:57:51.698290 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-01 00:57:51.698297 | orchestrator | Thursday 01 January 2026 00:57:46 +0000 (0:00:00.850) 0:03:53.541 ******
2026-01-01 00:57:51.698303 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:51.698309 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:51.698315 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:51.698321 | orchestrator |
2026-01-01 00:57:51.698331 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-01-01 00:57:51.698338 | orchestrator | Thursday 01 January 2026 00:57:46 +0000 (0:00:00.719) 0:03:54.260 ******
2026-01-01 00:57:51.698344 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-01-01 00:57:51.698355 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-01-01 00:57:51.698361 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-01-01 00:57:51.698368 | orchestrator |
2026-01-01 00:57:51.698374 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:57:51.698380 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-01-01 00:57:51.698387 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0
2026-01-01 00:57:51.698393 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0
2026-01-01 00:57:51.698400 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 00:57:51.698406 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 00:57:51.698412 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 00:57:51.698419 | orchestrator |
2026-01-01 00:57:51.698425 | orchestrator |
2026-01-01 00:57:51.698432 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:57:51.698438 | orchestrator | Thursday 01 January 2026 00:57:48 +0000 (0:00:01.482) 0:03:55.742 ******
2026-01-01 00:57:51.698444 | orchestrator | ===============================================================================
2026-01-01 00:57:51.698451 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 31.80s
2026-01-01 00:57:51.698457 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 31.41s
2026-01-01 00:57:51.698463 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 24.79s
2026-01-01 00:57:51.698469 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.58s
2026-01-01 00:57:51.698476 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.26s
2026-01-01 00:57:51.698482 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.46s
2026-01-01 00:57:51.698488 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.72s
2026-01-01 00:57:51.698494 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.84s
2026-01-01 00:57:51.698500 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.97s
2026-01-01 00:57:51.698507 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.44s
2026-01-01 00:57:51.698513 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.52s
2026-01-01 00:57:51.698519 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.30s
2026-01-01 00:57:51.698525 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.90s
2026-01-01 00:57:51.698531 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.85s
2026-01-01 00:57:51.698538 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.57s
2026-01-01 00:57:51.698547 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.56s
2026-01-01 00:57:51.698557 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.30s
2026-01-01 00:57:51.698567 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.17s
2026-01-01 00:57:51.698602 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.97s
2026-01-01 00:57:51.698612 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.85s
2026-01-01 00:57:51.698622 | orchestrator | 2026-01-01 00:57:51 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:57:51.698640 | orchestrator | 2026-01-01 00:57:51 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:57:51.698655 | orchestrator | 2026-01-01 00:57:51 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:57:54.743860 | orchestrator | 2026-01-01 00:57:54 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:57:54.746006 | orchestrator | 2026-01-01 00:57:54 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:57:54.746139 | orchestrator | 2026-01-01 00:57:54 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:57:57.793078 | orchestrator | 2026-01-01 00:57:57 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:57:57.795722 | orchestrator | 2026-01-01 00:57:57 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:57:57.795830 | orchestrator | 2026-01-01 00:57:57 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:00.841784 | orchestrator | 2026-01-01 00:58:00 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:00.843848 | orchestrator | 2026-01-01 00:58:00 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:00.844125 | orchestrator | 2026-01-01 00:58:00 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:03.895007 | orchestrator | 2026-01-01 00:58:03 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:03.895109 | orchestrator | 2026-01-01 00:58:03 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:03.895123 | orchestrator | 2026-01-01 00:58:03 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:06.940269 | orchestrator | 2026-01-01 00:58:06 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:06.941268 | orchestrator | 2026-01-01 00:58:06 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:06.941384 | orchestrator | 2026-01-01 00:58:06 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:09.986822 | orchestrator | 2026-01-01 00:58:09 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:09.990546 | orchestrator | 2026-01-01 00:58:09 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:09.990619 | orchestrator | 2026-01-01 00:58:09 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:13.034713 | orchestrator | 2026-01-01 00:58:13 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:13.035043 | orchestrator | 2026-01-01 00:58:13 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:13.035162 | orchestrator | 2026-01-01 00:58:13 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:16.074434 | orchestrator | 2026-01-01 00:58:16 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:16.074628 | orchestrator | 2026-01-01 00:58:16 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:16.074645 | orchestrator | 2026-01-01 00:58:16 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:19.119994 | orchestrator | 2026-01-01 00:58:19 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:19.121039 | orchestrator | 2026-01-01 00:58:19 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:19.122519 | orchestrator | 2026-01-01 00:58:19 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:22.169055 | orchestrator | 2026-01-01 00:58:22 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:22.169896 | orchestrator | 2026-01-01 00:58:22 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:22.169912 | orchestrator | 2026-01-01 00:58:22 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:25.211989 | orchestrator | 2026-01-01 00:58:25 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:25.213447 | orchestrator | 2026-01-01 00:58:25 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:25.213471 | orchestrator | 2026-01-01 00:58:25 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:28.268444 | orchestrator | 2026-01-01 00:58:28 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:28.271142 | orchestrator | 2026-01-01 00:58:28 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:28.271378 | orchestrator | 2026-01-01 00:58:28 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:31.320284 | orchestrator | 2026-01-01 00:58:31 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:31.321296 | orchestrator | 2026-01-01 00:58:31 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:31.321332 | orchestrator | 2026-01-01 00:58:31 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:34.369886 | orchestrator | 2026-01-01 00:58:34 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:34.372692 | orchestrator | 2026-01-01 00:58:34 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:34.372774 | orchestrator | 2026-01-01 00:58:34 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:37.426280 | orchestrator | 2026-01-01 00:58:37 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:37.427424 | orchestrator | 2026-01-01 00:58:37 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:37.427874 | orchestrator | 2026-01-01 00:58:37 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:40.472866 | orchestrator | 2026-01-01 00:58:40 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:40.474951 | orchestrator | 2026-01-01 00:58:40 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:40.474992 | orchestrator | 2026-01-01 00:58:40 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:43.531101 | orchestrator | 2026-01-01 00:58:43 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:43.531903 | orchestrator | 2026-01-01 00:58:43 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:43.532521 | orchestrator | 2026-01-01 00:58:43 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:46.580648 | orchestrator | 2026-01-01 00:58:46 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:46.583568 | orchestrator | 2026-01-01 00:58:46 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:46.583608 | orchestrator | 2026-01-01 00:58:46 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:49.625374 | orchestrator | 2026-01-01 00:58:49 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:49.626328 | orchestrator | 2026-01-01 00:58:49 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:49.626390 | orchestrator | 2026-01-01 00:58:49 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:52.676421 | orchestrator | 2026-01-01 00:58:52 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:52.678590 | orchestrator | 2026-01-01 00:58:52 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:52.678639 | orchestrator | 2026-01-01 00:58:52 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:55.726696 | orchestrator | 2026-01-01 00:58:55 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:55.729827 | orchestrator | 2026-01-01 00:58:55 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:55.729885 | orchestrator | 2026-01-01 00:58:55 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:58.776491 | orchestrator | 2026-01-01 00:58:58 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:58:58.779749 | orchestrator | 2026-01-01 00:58:58 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:58:58.779798 | orchestrator | 2026-01-01 00:58:58 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:01.835156 | orchestrator | 2026-01-01 00:59:01 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:01.838156 | orchestrator | 2026-01-01 00:59:01 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:01.838195 | orchestrator | 2026-01-01 00:59:01 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:04.882475 | orchestrator | 2026-01-01 00:59:04 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:04.882674 | orchestrator | 2026-01-01 00:59:04 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:04.885757 | orchestrator | 2026-01-01 00:59:04 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:07.933554 | orchestrator | 2026-01-01 00:59:07 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:07.935250 | orchestrator | 2026-01-01 00:59:07 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:07.935322 | orchestrator | 2026-01-01 00:59:07 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:10.971034 | orchestrator | 2026-01-01 00:59:10 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:10.974279 | orchestrator | 2026-01-01 00:59:10 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:10.974329 | orchestrator | 2026-01-01 00:59:10 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:14.025372 | orchestrator | 2026-01-01 00:59:14 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:14.027164 | orchestrator | 2026-01-01 00:59:14 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:14.027212 | orchestrator | 2026-01-01 00:59:14 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:17.079527 | orchestrator | 2026-01-01 00:59:17 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:17.081948 | orchestrator | 2026-01-01 00:59:17 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:17.081979 | orchestrator | 2026-01-01 00:59:17 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:20.130613 | orchestrator | 2026-01-01 00:59:20 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:20.131933 | orchestrator | 2026-01-01 00:59:20 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:20.131955 | orchestrator | 2026-01-01 00:59:20 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:23.170453 | orchestrator | 2026-01-01 00:59:23 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:23.171128 | orchestrator | 2026-01-01 00:59:23 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:23.171174 | orchestrator | 2026-01-01 00:59:23 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:26.225325 | orchestrator | 2026-01-01 00:59:26 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:26.228006 | orchestrator | 2026-01-01 00:59:26 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:26.228323 | orchestrator | 2026-01-01 00:59:26 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:29.272647 | orchestrator | 2026-01-01 00:59:29 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:29.275582 | orchestrator | 2026-01-01 00:59:29 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:29.275868 | orchestrator | 2026-01-01 00:59:29 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:32.328948 | orchestrator | 2026-01-01 00:59:32 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:32.330581 | orchestrator | 2026-01-01 00:59:32 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:32.330769 | orchestrator | 2026-01-01 00:59:32 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:35.382802 | orchestrator | 2026-01-01 00:59:35 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:35.385298 | orchestrator | 2026-01-01 00:59:35 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:35.385357 | orchestrator | 2026-01-01 00:59:35 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:38.441382 | orchestrator | 2026-01-01 00:59:38 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:38.444300 | orchestrator | 2026-01-01 00:59:38 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:38.444342 | orchestrator | 2026-01-01 00:59:38 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:41.486847 | orchestrator | 2026-01-01 00:59:41 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:41.486960 | orchestrator | 2026-01-01 00:59:41 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:41.486977 | orchestrator | 2026-01-01 00:59:41 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:44.528701 | orchestrator | 2026-01-01 00:59:44 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:44.533054 | orchestrator | 2026-01-01 00:59:44 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:44.533156 | orchestrator | 2026-01-01 00:59:44 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:47.575498 | orchestrator | 2026-01-01 00:59:47 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state STARTED
2026-01-01 00:59:47.575649 | orchestrator | 2026-01-01 00:59:47 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:47.575667 | orchestrator | 2026-01-01 00:59:47 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:50.628332 | orchestrator | 2026-01-01 00:59:50 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 00:59:50.638733 | orchestrator | 2026-01-01 00:59:50 | INFO  | Task 66341c0c-394a-4bd6-ad5e-89aad60a3200 is in state SUCCESS
2026-01-01 00:59:50.639772 | orchestrator |
2026-01-01 00:59:50.641294 | orchestrator |
2026-01-01 00:59:50.641338 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 00:59:50.641352 | orchestrator |
2026-01-01 00:59:50.641363 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 00:59:50.641374 | orchestrator | Thursday 01 January 2026 00:52:28 +0000 (0:00:00.636) 0:00:00.637 ******
2026-01-01 00:59:50.641386 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:59:50.641397 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:59:50.641408 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:59:50.641419 | orchestrator |
2026-01-01 00:59:50.641430 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 00:59:50.641441 | orchestrator | Thursday 01 January 2026 00:52:28 +0000 (0:00:00.650) 0:00:01.287 ******
2026-01-01 00:59:50.641452 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-01 00:59:50.641463 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-01 00:59:50.641474 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-01 00:59:50.641484 | orchestrator |
2026-01-01 00:59:50.641495 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-01 00:59:50.641506 | orchestrator |
2026-01-01 00:59:50.641594 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-01 00:59:50.641624 | orchestrator | Thursday 01 January 2026 00:52:29 +0000 (0:00:00.594) 0:00:01.881 ******
2026-01-01 00:59:50.641647 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:59:50.641665 | orchestrator |
2026-01-01 00:59:50.641682 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-01 00:59:50.641824 | orchestrator | Thursday 01 January 2026 00:52:30 +0000 (0:00:00.894) 0:00:02.776 ******
2026-01-01 00:59:50.641841 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:59:50.641852 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:59:50.641863 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:59:50.641874 | orchestrator |
2026-01-01 00:59:50.641885 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-01 00:59:50.641896 | orchestrator | Thursday 01 January 2026 00:52:31 +0000 (0:00:01.061) 0:00:03.838 ******
2026-01-01 00:59:50.641937 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:59:50.641948 | orchestrator |
2026-01-01 00:59:50.641959 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-01 00:59:50.641970 | orchestrator | Thursday 01 January 2026 00:52:32 +0000 (0:00:01.255) 0:00:05.094 ******
2026-01-01 00:59:50.641981 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:59:50.641992 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:59:50.642003 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:59:50.642062 | orchestrator |
2026-01-01 00:59:50.642078 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-01 00:59:50.642089 | orchestrator | Thursday 01 January 2026 00:52:33 +0000 (0:00:00.816) 0:00:05.911 ******
2026-01-01 00:59:50.642100 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:59:50.642111 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:59:50.642123 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:59:50.642133 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:59:50.642144 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:59:50.642181 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:59:50.642193 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-01 00:59:50.642205 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-01 00:59:50.642216 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-01 00:59:50.642227 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-01 00:59:50.642238 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-01 00:59:50.642248 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-01 00:59:50.642259 | orchestrator |
2026-01-01 00:59:50.642270 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-01 00:59:50.642281 | orchestrator |
Thursday 01 January 2026 00:52:37 +0000 (0:00:04.145) 0:00:10.080 ****** 2026-01-01 00:59:50.642292 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-01 00:59:50.642304 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-01 00:59:50.642315 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-01 00:59:50.642326 | orchestrator | 2026-01-01 00:59:50.642336 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-01 00:59:50.642347 | orchestrator | Thursday 01 January 2026 00:52:39 +0000 (0:00:01.621) 0:00:11.702 ****** 2026-01-01 00:59:50.642358 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-01 00:59:50.642369 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-01 00:59:50.642380 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-01 00:59:50.642391 | orchestrator | 2026-01-01 00:59:50.642402 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-01 00:59:50.642412 | orchestrator | Thursday 01 January 2026 00:52:41 +0000 (0:00:02.250) 0:00:13.953 ****** 2026-01-01 00:59:50.642437 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-01 00:59:50.642448 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.642477 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-01 00:59:50.642489 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.642607 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-01 00:59:50.642620 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.642631 | orchestrator | 2026-01-01 00:59:50.642643 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-01-01 00:59:50.642654 | orchestrator | Thursday 01 January 2026 00:52:43 +0000 (0:00:01.795) 0:00:15.748 ****** 2026-01-01 00:59:50.642669 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.642687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.642721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:59:50.642742 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:59:50.642800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.642824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:59:50.642871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:59:50.642893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:59:50.642906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:59:50.642926 | orchestrator | 2026-01-01 00:59:50.642938 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-01 00:59:50.642949 | orchestrator | Thursday 01 January 2026 00:52:46 +0000 (0:00:02.889) 0:00:18.638 ****** 2026-01-01 00:59:50.642960 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.642971 | orchestrator | changed: [testbed-node-0] 2026-01-01 
00:59:50.642982 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.642992 | orchestrator | 2026-01-01 00:59:50.643003 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-01 00:59:50.643014 | orchestrator | Thursday 01 January 2026 00:52:48 +0000 (0:00:01.758) 0:00:20.397 ****** 2026-01-01 00:59:50.643025 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-01 00:59:50.643036 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-01 00:59:50.643046 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-01 00:59:50.643057 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-01 00:59:50.643068 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-01 00:59:50.643079 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-01 00:59:50.643089 | orchestrator | 2026-01-01 00:59:50.643100 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-01 00:59:50.643111 | orchestrator | Thursday 01 January 2026 00:52:50 +0000 (0:00:02.823) 0:00:23.221 ****** 2026-01-01 00:59:50.643121 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.643132 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.643143 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.643153 | orchestrator | 2026-01-01 00:59:50.643164 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-01 00:59:50.643174 | orchestrator | Thursday 01 January 2026 00:52:53 +0000 (0:00:02.254) 0:00:25.475 ****** 2026-01-01 00:59:50.643185 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:50.643196 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:50.643207 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:50.643217 | orchestrator | 2026-01-01 00:59:50.643228 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] 
********** 2026-01-01 00:59:50.643239 | orchestrator | Thursday 01 January 2026 00:52:56 +0000 (0:00:02.912) 0:00:28.387 ****** 2026-01-01 00:59:50.643250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:59:50.643276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:59:50.643289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}}})  2026-01-01 00:59:50.643308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:59:50.643320 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.643331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:59:50.643343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:59:50.643354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.643366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:59:50.643377 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.643402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:59:50.643421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:59:50.643432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.643444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:59:50.643455 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.643467 | orchestrator | 2026-01-01 00:59:50.643478 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-01 00:59:50.643489 | orchestrator | Thursday 01 January 2026 00:52:57 +0000 (0:00:01.328) 0:00:29.716 ****** 2026-01-01 00:59:50.643500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.643611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.643650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.643671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:59:50.643683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.643695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:59:50.643706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:59:50.643717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.643728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:59:50.643758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:59:50.643771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.643782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8', '__omit_place_holder__eed1aa9d099deea33127112b45c79809e0f101e8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:59:50.643794 | orchestrator | 2026-01-01 00:59:50.643805 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-01 00:59:50.643816 | orchestrator | Thursday 01 January 2026 00:53:01 +0000 (0:00:04.294) 0:00:34.010 ****** 2026-01-01 00:59:50.643827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.643838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-01 
00:59:50.643850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.643885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:59:50.643898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-01-01 00:59:50.643909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.643920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.643932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.643943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.643954 | orchestrator |
2026-01-01 00:59:50.643966 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-01-01 00:59:50.643977 | orchestrator | Thursday 01 January 2026 00:53:05 +0000 (0:00:04.085) 0:00:38.096 ******
2026-01-01 00:59:50.643994 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-01-01 00:59:50.644005 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-01-01 00:59:50.644016 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-01-01 00:59:50.644027 | orchestrator |
2026-01-01 00:59:50.644038 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-01-01 00:59:50.644049 | orchestrator | Thursday 01 January 2026 00:53:08 +0000 (0:00:02.751) 0:00:40.847 ******
2026-01-01 00:59:50.644060 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-01-01 00:59:50.644071 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-01-01 00:59:50.644087 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-01-01 00:59:50.644098 | orchestrator |
2026-01-01 00:59:50.645309 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-01-01 00:59:50.645347 | orchestrator | Thursday 01 January 2026 00:53:13 +0000 (0:00:05.391) 0:00:46.239 ******
2026-01-01 00:59:50.645359 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.645371 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.645381 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.645392 | orchestrator |
2026-01-01 00:59:50.645403 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-01-01 00:59:50.645414 | orchestrator | Thursday 01 January 2026 00:53:15 +0000 (0:00:01.097) 0:00:47.336 ******
2026-01-01 00:59:50.645426 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-01-01 00:59:50.645438 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-01-01 00:59:50.645448 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-01-01 00:59:50.645459 | orchestrator |
2026-01-01 00:59:50.645470 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-01-01 00:59:50.645481 | orchestrator | Thursday 01 January 2026 00:53:17 +0000 (0:00:02.580) 0:00:49.917 ******
2026-01-01 00:59:50.645492 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-01-01 00:59:50.645503 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-01-01 00:59:50.645514 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-01-01 00:59:50.645612 | orchestrator |
2026-01-01 00:59:50.645623 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-01 00:59:50.645634 | orchestrator | Thursday 01 January 2026 00:53:19 +0000 (0:00:02.225) 0:00:52.143 ******
2026-01-01 00:59:50.645645 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:59:50.645656 | orchestrator |
2026-01-01 00:59:50.645666 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-01-01 00:59:50.645677 | orchestrator | Thursday 01 January 2026 00:53:20 +0000 (0:00:00.502) 0:00:52.645 ******
2026-01-01 00:59:50.645688 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-01-01 00:59:50.645699 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-01-01 00:59:50.645710 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-01-01 00:59:50.645721 | orchestrator |
2026-01-01 00:59:50.645732 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-01-01 00:59:50.645742 | orchestrator | Thursday 01 January 2026 00:53:22 +0000 (0:00:01.797) 0:00:54.442 ******
2026-01-01 00:59:50.645766 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-01-01 00:59:50.645777 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-01-01 00:59:50.645786 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-01-01 00:59:50.645796 | orchestrator |
2026-01-01 00:59:50.645805 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] ***************************
2026-01-01 00:59:50.645815 | orchestrator | Thursday 01 January 2026 00:53:25 +0000 (0:00:03.379) 0:00:57.821 ******
2026-01-01 00:59:50.645824 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.645834 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.645843 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.645853 | orchestrator |
2026-01-01 00:59:50.645863 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] ****************************
2026-01-01 00:59:50.645872 | orchestrator | Thursday 01 January 2026 00:53:25 +0000 (0:00:00.295) 0:00:58.116 ******
2026-01-01 00:59:50.645882 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.645894 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.645905 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.645916 | orchestrator |
2026-01-01 00:59:50.646005 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-01-01 00:59:50.646065 | orchestrator | Thursday 01 January 2026 00:53:26 +0000 (0:00:00.439) 0:00:58.556 ******
2026-01-01 00:59:50.646161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.646198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.646212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.646224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.646245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.646255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.646265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.646276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.646296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.646307 | orchestrator |
2026-01-01 00:59:50.646317 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-01-01 00:59:50.646327 | orchestrator | Thursday 01 January 2026 00:53:30 +0000 (0:00:04.004) 0:01:02.560 ******
2026-01-01 00:59:50.646337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.646347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.646363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.646373 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.646384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.646394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.646404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.646414 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.646434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.646445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.646461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.646471 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.646481 | orchestrator |
2026-01-01 00:59:50.646491 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-01-01 00:59:50.646501 | orchestrator | Thursday 01 January 2026 00:53:31 +0000 (0:00:01.295) 0:01:03.856 ******
2026-01-01 00:59:50.646511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.646543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.646554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.646572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.646583 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.646593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.646609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.646619 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.646630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.646640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.646650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.646660 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.646670 | orchestrator |
2026-01-01 00:59:50.646680 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-01-01 00:59:50.646689 | orchestrator | Thursday 01 January 2026 00:53:33 +0000 (0:00:01.675) 0:01:05.531 ******
2026-01-01 00:59:50.646699 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-01-01 00:59:50.646709 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-01-01 00:59:50.646719 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-01-01 00:59:50.646729 | orchestrator |
2026-01-01 00:59:50.646739 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-01-01 00:59:50.646784 | orchestrator | Thursday 01 January 2026 00:53:34 +0000 (0:00:01.642) 0:01:07.174 ******
2026-01-01 00:59:50.646795 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-01-01 00:59:50.646814 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-01-01 00:59:50.646825 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-01-01 00:59:50.646841 | orchestrator |
2026-01-01 00:59:50.646851 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-01-01 00:59:50.646860 | orchestrator | Thursday 01 January 2026 00:53:36 +0000 (0:00:01.941) 0:01:09.116 ******
2026-01-01 00:59:50.646870 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-01-01 00:59:50.646880 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-01-01 00:59:50.646889 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-01 00:59:50.646899 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.646909 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-01-01 00:59:50.646918 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-01 00:59:50.646928 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.646938 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-01 00:59:50.646963 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.646973 | orchestrator |
2026-01-01 00:59:50.646983 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-01-01 00:59:50.646992 | orchestrator | Thursday 01 January 2026 00:53:38 +0000 (0:00:01.289) 0:01:10.406 ******
2026-01-01 00:59:50.647003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.647013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.647024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-01 00:59:50.647034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.647094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.647107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:59:50.647117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.647127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.647156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:59:50.647167 | orchestrator |
2026-01-01 00:59:50.647177 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-01-01 00:59:50.647186 | orchestrator | Thursday 01 January 2026 00:53:41 +0000 (0:00:03.317) 0:01:13.723 ******
2026-01-01 00:59:50.647196 | orchestrator | changed: [testbed-node-0] => {
2026-01-01 00:59:50.647206 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:59:50.647216 | orchestrator | }
2026-01-01 00:59:50.647273 | orchestrator | changed: [testbed-node-1] => {
2026-01-01 00:59:50.647284 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 00:59:50.647294 | orchestrator | } 2026-01-01 00:59:50.647304 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 00:59:50.647313 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:59:50.647323 | orchestrator | } 2026-01-01 00:59:50.647333 | orchestrator | 2026-01-01 00:59:50.647343 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 00:59:50.647352 | orchestrator | Thursday 01 January 2026 00:53:41 +0000 (0:00:00.351) 0:01:14.075 ****** 2026-01-01 00:59:50.647368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:59:50.647397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:59:50.647408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.647418 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.647428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:59:50.647438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:59:50.647449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.647458 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.647469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:59:50.647484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:59:50.647503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.647514 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.647539 | orchestrator | 2026-01-01 00:59:50.647548 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-01 00:59:50.647558 | orchestrator | Thursday 01 January 2026 00:53:43 +0000 (0:00:01.284) 0:01:15.359 ****** 2026-01-01 00:59:50.647568 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.647578 | orchestrator | 2026-01-01 00:59:50.647587 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-01 00:59:50.647597 | orchestrator | Thursday 01 January 2026 00:53:43 +0000 (0:00:00.780) 0:01:16.139 ****** 2026-01-01 00:59:50.647608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.647622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.647633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647666 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.647677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.647711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.647748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.647769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647790 | orchestrator | 2026-01-01 00:59:50.647800 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-01 00:59:50.647810 | orchestrator | Thursday 01 January 2026 00:53:47 +0000 (0:00:03.993) 
0:01:20.133 ****** 2026-01-01 00:59:50.647820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.647831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.647849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647869 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.647889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.647900 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.647911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.647926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.647947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647977 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.647987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.647997 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.648007 | orchestrator | 2026-01-01 00:59:50.648017 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-01 00:59:50.648026 | orchestrator | Thursday 01 January 2026 00:53:48 +0000 (0:00:00.863) 0:01:20.996 ****** 2026-01-01 00:59:50.648037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648064 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.648140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-01 
00:59:50.648151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648161 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.648171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648191 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.648200 | orchestrator | 2026-01-01 00:59:50.648210 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-01 00:59:50.648220 | orchestrator | Thursday 01 January 2026 00:53:49 +0000 (0:00:01.038) 0:01:22.034 ****** 2026-01-01 00:59:50.648230 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.648239 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.648249 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.648258 | orchestrator | 2026-01-01 00:59:50.648268 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-01 00:59:50.648277 | orchestrator | Thursday 01 January 2026 00:53:51 +0000 (0:00:01.573) 0:01:23.608 ****** 2026-01-01 00:59:50.648287 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.648296 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.648306 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.648316 | 
orchestrator | 2026-01-01 00:59:50.648325 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-01 00:59:50.648335 | orchestrator | Thursday 01 January 2026 00:53:53 +0000 (0:00:02.503) 0:01:26.111 ****** 2026-01-01 00:59:50.648345 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.648354 | orchestrator | 2026-01-01 00:59:50.648364 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-01 00:59:50.648373 | orchestrator | Thursday 01 January 2026 00:53:55 +0000 (0:00:01.710) 0:01:27.821 ****** 2026-01-01 00:59:50.648396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.648409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.648448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.648498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648546 | orchestrator | 2026-01-01 00:59:50.648557 | orchestrator | TASK 
[haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-01 00:59:50.648567 | orchestrator | Thursday 01 January 2026 00:54:01 +0000 (0:00:06.271) 0:01:34.093 ****** 2026-01-01 00:59:50.648578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.648598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648608 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648624 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.648635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.648646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648666 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.648685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.648703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.648723 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.648733 | orchestrator | 2026-01-01 00:59:50.648743 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-01 00:59:50.648753 | orchestrator | Thursday 01 January 2026 00:54:02 +0000 (0:00:00.900) 0:01:34.993 ****** 2026-01-01 00:59:50.648764 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648785 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.648794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648814 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.648824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.648844 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.648853 | orchestrator | 2026-01-01 
00:59:50.648863 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-01 00:59:50.648873 | orchestrator | Thursday 01 January 2026 00:54:04 +0000 (0:00:01.376) 0:01:36.370 ****** 2026-01-01 00:59:50.648882 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.648958 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.648968 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.648978 | orchestrator | 2026-01-01 00:59:50.648993 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-01 00:59:50.649003 | orchestrator | Thursday 01 January 2026 00:54:05 +0000 (0:00:01.593) 0:01:37.963 ****** 2026-01-01 00:59:50.649012 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.649022 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.649032 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.649041 | orchestrator | 2026-01-01 00:59:50.649050 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-01 00:59:50.649060 | orchestrator | Thursday 01 January 2026 00:54:08 +0000 (0:00:02.758) 0:01:40.722 ****** 2026-01-01 00:59:50.649070 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.649093 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.649103 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.649113 | orchestrator | 2026-01-01 00:59:50.649128 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-01 00:59:50.649138 | orchestrator | Thursday 01 January 2026 00:54:08 +0000 (0:00:00.359) 0:01:41.082 ****** 2026-01-01 00:59:50.649148 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.649158 | orchestrator | 2026-01-01 00:59:50.649167 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] 
******************* 2026-01-01 00:59:50.649177 | orchestrator | Thursday 01 January 2026 00:54:09 +0000 (0:00:00.964) 0:01:42.047 ****** 2026-01-01 00:59:50.649187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-01 00:59:50.649199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-01 00:59:50.649210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 
'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-01 00:59:50.649220 | orchestrator | 2026-01-01 00:59:50.649235 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-01 00:59:50.649245 | orchestrator | Thursday 01 January 2026 00:54:15 +0000 (0:00:05.282) 0:01:47.329 ****** 2026-01-01 00:59:50.649255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-01 00:59:50.649265 | orchestrator | skipping: [testbed-node-2] 2026-01-01 
00:59:50.649285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-01 00:59:50.649296 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.649306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-01 00:59:50.649316 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.649326 | orchestrator | 2026-01-01 00:59:50.649336 | orchestrator | TASK [haproxy-config : Configuring 
firewall for ceph-rgw] ********************** 2026-01-01 00:59:50.649345 | orchestrator | Thursday 01 January 2026 00:54:17 +0000 (0:00:02.931) 0:01:50.260 ****** 2026-01-01 00:59:50.649355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:59:50.649421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:59:50.649440 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.649450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:59:50.649461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:59:50.649470 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.649480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:59:50.649500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:59:50.649511 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.649538 | orchestrator | 2026-01-01 00:59:50.649549 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-01 00:59:50.649558 | orchestrator | Thursday 01 January 2026 00:54:20 +0000 (0:00:02.304) 0:01:52.564 ****** 2026-01-01 00:59:50.649568 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.649578 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.649587 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.649597 | orchestrator | 2026-01-01 00:59:50.649607 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-01 00:59:50.649616 | orchestrator | Thursday 01 January 2026 00:54:20 +0000 (0:00:00.437) 0:01:53.001 ****** 2026-01-01 00:59:50.649626 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.649636 | 
orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.649645 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.649655 | orchestrator | 2026-01-01 00:59:50.649665 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-01 00:59:50.649674 | orchestrator | Thursday 01 January 2026 00:54:22 +0000 (0:00:01.481) 0:01:54.483 ****** 2026-01-01 00:59:50.649684 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.649693 | orchestrator | 2026-01-01 00:59:50.649703 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-01 00:59:50.649713 | orchestrator | Thursday 01 January 2026 00:54:23 +0000 (0:00:00.944) 0:01:55.427 ****** 2026-01-01 00:59:50.649723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.649741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.649752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.649774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-01 
00:59:50.649786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.649797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.649812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.649823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.650284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650328 | orchestrator |
2026-01-01 00:59:50.650338 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-01-01 00:59:50.650348 | orchestrator | Thursday 01 January 2026 00:54:27 +0000 (0:00:04.692) 0:02:00.120 ******
2026-01-01 00:59:50.650358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.650373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650417 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.650428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.650439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650480 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.650490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.650506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.650560 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.650570 | orchestrator |
2026-01-01 00:59:50.650579 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-01-01 00:59:50.650589 | orchestrator | Thursday 01 January 2026 00:54:29 +0000 (0:00:01.248) 0:02:01.369 ******
2026-01-01 00:59:50.650600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.650619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.650630 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.650640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.650650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.650660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.650681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.650691 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.650701 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.650711 | orchestrator |
2026-01-01 00:59:50.650720 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-01-01 00:59:50.650730 | orchestrator | Thursday 01 January 2026 00:54:30 +0000 (0:00:01.188) 0:02:02.557 ******
2026-01-01 00:59:50.650740 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.650749 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.650759 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.650768 | orchestrator |
2026-01-01 00:59:50.650778 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-01-01 00:59:50.650788 | orchestrator | Thursday 01 January 2026 00:54:32 +0000 (0:00:01.781) 0:02:04.339 ******
2026-01-01 00:59:50.650882 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.650894 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.650905 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.650917 | orchestrator |
2026-01-01 00:59:50.650929 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-01-01 00:59:50.650940 | orchestrator | Thursday 01 January 2026 00:54:34 +0000 (0:00:02.228) 0:02:06.567 ******
2026-01-01 00:59:50.650965 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.650977 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.650987 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.650999 | orchestrator |
2026-01-01 00:59:50.651010 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-01-01 00:59:50.651021 | orchestrator | Thursday 01 January 2026 00:54:34 +0000 (0:00:00.328) 0:02:06.896 ******
2026-01-01 00:59:50.651032 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.651043 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.651054 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.651065 | orchestrator |
2026-01-01 00:59:50.651077 | orchestrator | TASK [include_role : designate] ************************************************
2026-01-01 00:59:50.651089 | orchestrator | Thursday 01 January 2026 00:54:34 +0000 (0:00:00.315) 0:02:07.212 ******
2026-01-01 00:59:50.651100 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:59:50.651111 | orchestrator |
2026-01-01 00:59:50.651123 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-01-01 00:59:50.651134 | orchestrator | Thursday 01 January 2026 00:54:35 +0000 (0:00:01.051) 0:02:08.263 ******
2026-01-01 00:59:50.651147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.651170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-01 00:59:50.651190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.651227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-01 00:59:50.651274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.651360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-01 00:59:50.651371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651477 | orchestrator |
2026-01-01 00:59:50.651486 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-01-01 00:59:50.651496 | orchestrator | Thursday 01 January 2026 00:54:40 +0000 (0:00:04.511) 0:02:12.775 ******
2026-01-01 00:59:50.651507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.651565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-01 00:59:50.651577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651710 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.651720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.651731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-01 00:59:50.651747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.651757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.651778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.651789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.651799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-01 00:59:50.651809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.651825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.651835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.651845 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.651863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.651874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.651904 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.651914 | orchestrator |
2026-01-01 00:59:50.651929 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-01-01 00:59:50.651939 | orchestrator | Thursday 01 January 2026 00:54:41 +0000 (0:00:00.920) 0:02:13.695 ******
2026-01-01 00:59:50.651949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.651960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.652003 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.652014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.652024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.652034 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.652044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.652054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.652064 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.652099 | orchestrator |
2026-01-01 00:59:50.652112 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-01-01 00:59:50.652120 | orchestrator | Thursday 01 January 2026 00:54:42 +0000 (0:00:01.500) 0:02:15.195 ******
2026-01-01 00:59:50.652128 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.652136 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.652144 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.652152 | orchestrator |
2026-01-01 00:59:50.652160 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-01-01 00:59:50.652168 | orchestrator | Thursday 01 January 2026 00:54:44 +0000 (0:00:01.434) 0:02:16.630 ******
2026-01-01 00:59:50.652176 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.652183 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.652191 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.652199 | orchestrator |
2026-01-01 00:59:50.652207 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-01-01 00:59:50.652231 | orchestrator | Thursday 01 January 2026 00:54:46 +0000 (0:00:02.266) 0:02:18.897 ******
2026-01-01 00:59:50.652240 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.652248 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.652256 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.652263 | orchestrator |
2026-01-01 00:59:50.652271 | orchestrator | TASK [include_role : glance] ***************************************************
2026-01-01 00:59:50.652279 | orchestrator | Thursday 01 January 2026 00:54:46 +0000 (0:00:00.325) 0:02:19.223 ******
2026-01-01 00:59:50.652287 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:59:50.652295 | orchestrator |
2026-01-01 00:59:50.652303 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-01-01 00:59:50.652311 | orchestrator | Thursday 01 January 2026 00:54:47 +0000 (0:00:01.070) 0:02:20.293 ******
2026-01-01 00:59:50.652320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True,
'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-01 00:59:50.653740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-01 00:59:50.653773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-01 00:59:50.653804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-01 00:59:50.653813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-01 00:59:50.653829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-01 00:59:50.653837 | orchestrator |
2026-01-01 00:59:50.653848 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-01-01 00:59:50.653855 | orchestrator | Thursday 01 January 2026 00:54:52 +0000 (0:00:04.374) 0:02:24.668 ******
2026-01-01 00:59:50.653863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/',
'/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-01 00:59:50.653875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-01 00:59:50.653883 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.653903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-01 00:59:50.653915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-01 00:59:50.653923 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.654322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-01 00:59:50.654348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-01 00:59:50.654356 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.654363 | orchestrator | 2026-01-01 00:59:50.654370 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-01 00:59:50.654377 | orchestrator | Thursday 01 January 2026 00:54:55 +0000 (0:00:03.527) 0:02:28.195 ****** 2026-01-01 00:59:50.654384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-01 00:59:50.654412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option 
httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-01 00:59:50.654421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-01 00:59:50.654434 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.654441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-01 00:59:50.654448 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.654455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-01 00:59:50.654462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-01 00:59:50.654469 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.654475 | orchestrator | 2026-01-01 00:59:50.654482 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-01 00:59:50.654489 | orchestrator | Thursday 01 January 2026 00:54:59 +0000 (0:00:03.829) 0:02:32.024 ****** 2026-01-01 00:59:50.654703 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.654714 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.654720 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.654727 | orchestrator | 2026-01-01 00:59:50.654734 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-01 00:59:50.654740 | orchestrator | Thursday 01 January 2026 00:55:01 +0000 (0:00:01.387) 0:02:33.411 ****** 2026-01-01 00:59:50.654747 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.654754 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.654760 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.654767 | orchestrator | 2026-01-01 00:59:50.654773 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-01 
00:59:50.654780 | orchestrator | Thursday 01 January 2026 00:55:03 +0000 (0:00:02.157) 0:02:35.569 ****** 2026-01-01 00:59:50.654787 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.654793 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.654800 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.654806 | orchestrator | 2026-01-01 00:59:50.654813 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-01 00:59:50.654820 | orchestrator | Thursday 01 January 2026 00:55:03 +0000 (0:00:00.389) 0:02:35.959 ****** 2026-01-01 00:59:50.654826 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.654833 | orchestrator | 2026-01-01 00:59:50.654840 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-01 00:59:50.654846 | orchestrator | Thursday 01 January 2026 00:55:04 +0000 (0:00:00.951) 0:02:36.910 ****** 2026-01-01 00:59:50.654875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.654889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.654896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.654903 | orchestrator | 2026-01-01 00:59:50.654910 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-01 00:59:50.654917 | orchestrator | Thursday 01 January 2026 00:55:09 +0000 (0:00:04.427) 0:02:41.337 ****** 2026-01-01 00:59:50.654924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.654931 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.654938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.654949 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.654975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.654983 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.654990 | orchestrator | 2026-01-01 00:59:50.654997 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-01 00:59:50.655003 | orchestrator | Thursday 01 January 2026 00:55:09 +0000 (0:00:00.424) 0:02:41.762 ****** 2026-01-01 00:59:50.655010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.655018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.655025 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.655032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.655038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.655045 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.655052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.655059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.655065 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.655072 | orchestrator | 2026-01-01 00:59:50.655079 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-01 00:59:50.655085 | orchestrator | Thursday 01 January 2026 00:55:10 +0000 (0:00:00.716) 0:02:42.479 ****** 2026-01-01 00:59:50.655092 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.655140 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.655150 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.655156 | orchestrator | 2026-01-01 00:59:50.655163 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-01 00:59:50.655193 | orchestrator | Thursday 01 January 2026 00:55:11 +0000 (0:00:01.658) 0:02:44.138 ****** 2026-01-01 00:59:50.655200 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.655207 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.655214 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.655220 | orchestrator | 2026-01-01 00:59:50.655227 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-01 00:59:50.655239 | orchestrator | Thursday 01 January 2026 00:55:14 +0000 (0:00:02.159) 0:02:46.297 ****** 2026-01-01 00:59:50.655246 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.655253 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.655260 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.655266 | orchestrator | 
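The `haproxy-config` loop items logged above (e.g. the `glance_api` entries with `mode`, `backend_http_extra`, and `custom_member_list`) are the per-service data kolla-ansible feeds into its HAProxy templates. As a minimal illustrative sketch only (this is not the actual kolla-ansible Jinja template; `render_backend` is a hypothetical helper), one such dict could be rendered into an HAProxy backend stanza like so:

```python
# Minimal sketch: render one HAProxy backend stanza from a kolla-style
# service dict, using the field names visible in the log above.
# NOTE: render_backend is illustrative, not part of kolla-ansible.

def render_backend(name: str, svc: dict) -> str:
    """Render an HAProxy backend section from a service dict."""
    lines = [f"backend {name}_back", f"    mode {svc.get('mode', 'http')}"]
    # Extra backend options, e.g. 'timeout server 6h', 'option httpchk'.
    for extra in svc.get("backend_http_extra", []):
        lines.append(f"    {extra}")
    # custom_member_list entries are emitted verbatim; the empty strings
    # visible as '' in the logged lists are skipped.
    for member in svc.get("custom_member_list", []):
        if member:
            lines.append(f"    {member}")
    return "\n".join(lines)

# Example input mirroring the 'glance_api' item skipped in the log.
glance_api = {
    "enabled": True,
    "mode": "http",
    "port": "9292",
    "backend_http_extra": ["timeout server 6h", "option httpchk"],
    "custom_member_list": [
        "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        "",  # trailing empty entry, as present in the logged lists
    ],
}

print(render_backend("glance_api", glance_api))
```

The `server ... check inter 2000 rise 2 fall 5` member lines pass through unchanged, which is why they appear verbatim in both the internal (`glance_api`) and external (`glance_api_external`) variants in the log.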
2026-01-01 00:59:50.655273 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-01 00:59:50.655280 | orchestrator | Thursday 01 January 2026 00:55:14 +0000 (0:00:00.347) 0:02:46.645 ****** 2026-01-01 00:59:50.655287 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.655293 | orchestrator | 2026-01-01 00:59:50.655300 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-01 00:59:50.655306 | orchestrator | Thursday 01 January 2026 00:55:15 +0000 (0:00:01.032) 0:02:47.677 ****** 2026-01-01 00:59:50.655337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 00:59:50.655347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 00:59:50.655381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 00:59:50.655391 | orchestrator | 2026-01-01 00:59:50.655399 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-01 00:59:50.655407 | orchestrator | Thursday 01 January 2026 00:55:20 +0000 (0:00:04.813) 0:02:52.491 ****** 2026-01-01 00:59:50.655435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 00:59:50.655453 | orchestrator | 
skipping: [testbed-node-0] 2026-01-01 00:59:50.655462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 00:59:50.656081 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.656121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 00:59:50.656130 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.656136 | orchestrator | 2026-01-01 00:59:50.656143 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-01 00:59:50.656149 | orchestrator | Thursday 01 January 2026 00:55:20 +0000 (0:00:00.791) 0:02:53.283 ****** 2026-01-01 00:59:50.656156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-01 00:59:50.656164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:59:50.656171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-01 00:59:50.656179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:59:50.656190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-01 00:59:50.656196 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.656203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-01 00:59:50.656210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:59:50.656216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-01 00:59:50.656223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:59:50.656229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-01 00:59:50.656235 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.656258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-01 00:59:50.656265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:59:50.656272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-01 00:59:50.656278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:59:50.656285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-01 00:59:50.656291 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.656793 | orchestrator | 2026-01-01 00:59:50.656818 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-01 00:59:50.656832 | orchestrator | Thursday 01 January 2026 00:55:22 +0000 (0:00:01.661) 0:02:54.944 ****** 2026-01-01 00:59:50.656838 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.656845 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.656851 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.656857 | orchestrator | 2026-01-01 00:59:50.656863 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-01 00:59:50.656869 | orchestrator | Thursday 01 January 2026 00:55:24 +0000 (0:00:02.001) 0:02:56.945 ****** 2026-01-01 00:59:50.656875 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.656882 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.656888 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.656894 | orchestrator | 2026-01-01 00:59:50.656901 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-01 00:59:50.656907 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:02.337) 0:02:59.283 ****** 2026-01-01 00:59:50.656913 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.656919 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.656925 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.656931 | orchestrator | 2026-01-01 00:59:50.656937 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-01 
00:59:50.656944 | orchestrator | Thursday 01 January 2026 00:55:27 +0000 (0:00:00.386) 0:02:59.670 ****** 2026-01-01 00:59:50.656950 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.656956 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.656962 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.656968 | orchestrator | 2026-01-01 00:59:50.656974 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-01 00:59:50.656981 | orchestrator | Thursday 01 January 2026 00:55:28 +0000 (0:00:00.648) 0:03:00.318 ****** 2026-01-01 00:59:50.656987 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.656993 | orchestrator | 2026-01-01 00:59:50.656999 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-01 00:59:50.657005 | orchestrator | Thursday 01 January 2026 00:55:29 +0000 (0:00:01.724) 0:03:02.042 ****** 2026-01-01 00:59:50.657013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-01 00:59:50.657046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:50.657054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:50.657065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-01 00:59:50.657072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:50.657079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:50.657102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-01 00:59:50.657110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:50.657120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:50.657127 | orchestrator | 2026-01-01 00:59:50.657133 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-01 00:59:50.657139 | orchestrator | Thursday 01 January 2026 00:55:35 +0000 (0:00:06.157) 0:03:08.200 ****** 2026-01-01 00:59:50.657146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-01 00:59:50.657153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:50.657160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:50.657167 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.657190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-01 00:59:50.657263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:50.657271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:50.657277 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.657284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-01 00:59:50.657291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:50.657480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:50.657496 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.657503 | orchestrator | 2026-01-01 
00:59:50.657509 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-01 00:59:50.657535 | orchestrator | Thursday 01 January 2026 00:55:36 +0000 (0:00:00.924) 0:03:09.124 ****** 2026-01-01 00:59:50.657543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-01 00:59:50.657550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-01 00:59:50.657558 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.657564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-01 00:59:50.657571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-01 00:59:50.657577 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.657584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-01 00:59:50.657590 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-01 00:59:50.657597 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.657603 | orchestrator | 2026-01-01 00:59:50.657609 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-01 00:59:50.657616 | orchestrator | Thursday 01 January 2026 00:55:37 +0000 (0:00:00.868) 0:03:09.993 ****** 2026-01-01 00:59:50.657622 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.657628 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.657634 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.657641 | orchestrator | 2026-01-01 00:59:50.657647 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-01 00:59:50.657653 | orchestrator | Thursday 01 January 2026 00:55:39 +0000 (0:00:01.408) 0:03:11.402 ****** 2026-01-01 00:59:50.657659 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.657665 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.657672 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.657678 | orchestrator | 2026-01-01 00:59:50.657684 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-01 00:59:50.657690 | orchestrator | Thursday 01 January 2026 00:55:41 +0000 (0:00:02.251) 0:03:13.654 ****** 2026-01-01 00:59:50.657701 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.657707 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.657713 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.657719 | orchestrator | 2026-01-01 00:59:50.657725 | orchestrator | TASK [include_role : magnum] 
*************************************************** 2026-01-01 00:59:50.657731 | orchestrator | Thursday 01 January 2026 00:55:41 +0000 (0:00:00.331) 0:03:13.985 ****** 2026-01-01 00:59:50.657738 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.657744 | orchestrator | 2026-01-01 00:59:50.657750 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-01 00:59:50.657756 | orchestrator | Thursday 01 January 2026 00:55:42 +0000 (0:00:01.257) 0:03:15.242 ****** 2026-01-01 00:59:50.657782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.657790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.657798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.657805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.657818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.657839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.657894 | orchestrator | 2026-01-01 00:59:50.657902 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-01 00:59:50.657908 | orchestrator | Thursday 01 January 2026 00:55:47 +0000 (0:00:04.583) 0:03:19.826 ****** 2026-01-01 00:59:50.657915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.657922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.657933 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.657940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.657969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.657976 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.657983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.657990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.657997 | 
orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.658293 | orchestrator | 2026-01-01 00:59:50.658311 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-01 00:59:50.658328 | orchestrator | Thursday 01 January 2026 00:55:48 +0000 (0:00:01.036) 0:03:20.863 ****** 2026-01-01 00:59:50.658337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.658373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.658381 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.658435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.658443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.658451 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.658459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.658466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.658473 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.658479 | orchestrator | 2026-01-01 00:59:50.658489 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-01 00:59:50.658496 | orchestrator | Thursday 01 January 2026 00:55:49 +0000 (0:00:01.275) 0:03:22.138 ****** 2026-01-01 00:59:50.658540 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.658548 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.658554 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.658561 | orchestrator | 2026-01-01 00:59:50.658567 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-01 00:59:50.658596 | orchestrator | Thursday 01 January 2026 00:55:52 +0000 (0:00:02.208) 0:03:24.347 ****** 2026-01-01 00:59:50.658603 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.658640 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.658647 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.658653 | orchestrator | 2026-01-01 00:59:50.658659 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-01 00:59:50.658666 | orchestrator | Thursday 01 January 2026 00:55:54 +0000 (0:00:02.297) 0:03:26.644 ****** 2026-01-01 00:59:50.658672 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.658678 | orchestrator | 2026-01-01 00:59:50.658684 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-01 00:59:50.658691 | orchestrator | Thursday 01 January 2026 00:55:55 +0000 (0:00:01.586) 0:03:28.231 ****** 2026-01-01 00:59:50.658701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.658760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.658811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 
'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.658823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658884 | orchestrator | 2026-01-01 00:59:50.658891 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-01 00:59:50.658897 | orchestrator | Thursday 01 January 2026 00:56:03 +0000 (0:00:07.257) 0:03:35.488 ****** 2026-01-01 00:59:50.658904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.658910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.658917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658965 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.658972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.658985 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.658991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.659177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.659188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.659200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.659207 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.659213 | orchestrator | 2026-01-01 00:59:50.659220 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-01 00:59:50.659226 | orchestrator | Thursday 01 January 2026 00:56:04 +0000 (0:00:01.116) 0:03:36.605 ****** 2026-01-01 00:59:50.659233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.659240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.659246 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.659253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.659259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.659266 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.659272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': 
'8786', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.659278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.659285 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.659291 | orchestrator | 2026-01-01 00:59:50.659464 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-01 00:59:50.659478 | orchestrator | Thursday 01 January 2026 00:56:05 +0000 (0:00:01.231) 0:03:37.837 ****** 2026-01-01 00:59:50.659485 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.659491 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.659497 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.659503 | orchestrator | 2026-01-01 00:59:50.659510 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-01 00:59:50.659575 | orchestrator | Thursday 01 January 2026 00:56:06 +0000 (0:00:01.142) 0:03:38.980 ****** 2026-01-01 00:59:50.659588 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.659598 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.659604 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.659611 | orchestrator | 2026-01-01 00:59:50.659617 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-01 00:59:50.659692 | orchestrator | Thursday 01 January 2026 00:56:08 +0000 (0:00:01.979) 0:03:40.959 ****** 2026-01-01 00:59:50.659722 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.659729 | orchestrator | 2026-01-01 00:59:50.659735 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-01 
00:59:50.659741 | orchestrator | Thursday 01 January 2026 00:56:10 +0000 (0:00:01.370) 0:03:42.330 ****** 2026-01-01 00:59:50.659748 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-01 00:59:50.659754 | orchestrator | 2026-01-01 00:59:50.659760 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-01 00:59:50.659767 | orchestrator | Thursday 01 January 2026 00:56:13 +0000 (0:00:03.564) 0:03:45.895 ****** 2026-01-01 00:59:50.659775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:59:50.659784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:59:50.659790 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.659814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:59:50.659825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:59:50.659831 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.659837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:59:50.659844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:59:50.659871 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.659877 | orchestrator | 2026-01-01 00:59:50.659883 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-01 00:59:50.659888 | orchestrator | Thursday 01 January 2026 00:56:17 +0000 (0:00:04.084) 0:03:49.979 ****** 2026-01-01 00:59:50.659912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:59:50.659937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:59:50.659943 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.659951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:59:50.660011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:59:50.660019 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.660026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:59:50.660034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:59:50.660041 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.660052 | orchestrator | 2026-01-01 00:59:50.660058 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-01 00:59:50.660448 | orchestrator | Thursday 01 January 2026 00:56:21 +0000 (0:00:03.834) 0:03:53.814 ****** 2026-01-01 00:59:50.660467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:59:50.660506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:59:50.660514 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.660539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:59:50.660545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:59:50.660551 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.660556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:59:50.660562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:59:50.660575 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.660581 | orchestrator | 2026-01-01 00:59:50.660586 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-01 00:59:50.660592 | orchestrator | Thursday 01 January 2026 00:56:24 +0000 (0:00:02.494) 0:03:56.308 ****** 2026-01-01 00:59:50.660597 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.660603 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.660608 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.660653 | orchestrator | 2026-01-01 00:59:50.660660 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-01 00:59:50.660666 | orchestrator | Thursday 01 January 2026 00:56:26 +0000 (0:00:02.135) 0:03:58.444 ****** 2026-01-01 00:59:50.660671 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.660677 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.660682 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.660687 | orchestrator | 2026-01-01 00:59:50.660693 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-01 00:59:50.660698 | orchestrator | Thursday 01 January 2026 00:56:28 +0000 (0:00:01.862) 0:04:00.306 ****** 2026-01-01 00:59:50.660704 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.660709 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.660714 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.660720 | orchestrator | 
2026-01-01 00:59:50.660725 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-01 00:59:50.660731 | orchestrator | Thursday 01 January 2026 00:56:28 +0000 (0:00:00.354) 0:04:00.661 ****** 2026-01-01 00:59:50.660736 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.660741 | orchestrator | 2026-01-01 00:59:50.660747 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-01 00:59:50.660756 | orchestrator | Thursday 01 January 2026 00:56:29 +0000 (0:00:01.453) 0:04:02.115 ****** 2026-01-01 00:59:50.660803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-01 00:59:50.660811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-01 00:59:50.660817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-01 00:59:50.660828 | orchestrator | 2026-01-01 00:59:50.660834 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-01 00:59:50.660840 | orchestrator | Thursday 01 January 2026 00:56:31 +0000 (0:00:01.729) 0:04:03.844 ****** 2026-01-01 00:59:50.660866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-01 00:59:50.660872 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.660878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-01 00:59:50.660883 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.660928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-01 00:59:50.660936 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.660942 | orchestrator | 2026-01-01 00:59:50.660947 | orchestrator | TASK [haproxy-config : Configuring 
firewall for memcached] ********************* 2026-01-01 00:59:50.660952 | orchestrator | Thursday 01 January 2026 00:56:31 +0000 (0:00:00.409) 0:04:04.254 ****** 2026-01-01 00:59:50.660958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-01 00:59:50.660965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-01 00:59:50.660970 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.660976 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.660986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-01 00:59:50.660992 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.660997 | orchestrator | 2026-01-01 00:59:50.661002 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-01 00:59:50.661072 | orchestrator | Thursday 01 January 2026 00:56:32 +0000 (0:00:00.992) 0:04:05.246 ****** 2026-01-01 00:59:50.661079 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.661085 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.661090 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.661096 | orchestrator | 2026-01-01 00:59:50.661101 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] 
********** 2026-01-01 00:59:50.661107 | orchestrator | Thursday 01 January 2026 00:56:33 +0000 (0:00:00.549) 0:04:05.795 ****** 2026-01-01 00:59:50.661112 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.661118 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.661123 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.661129 | orchestrator | 2026-01-01 00:59:50.661134 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-01 00:59:50.661140 | orchestrator | Thursday 01 January 2026 00:56:34 +0000 (0:00:01.323) 0:04:07.119 ****** 2026-01-01 00:59:50.661145 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.661151 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.661156 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.661162 | orchestrator | 2026-01-01 00:59:50.661167 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-01 00:59:50.661173 | orchestrator | Thursday 01 January 2026 00:56:35 +0000 (0:00:00.394) 0:04:07.513 ****** 2026-01-01 00:59:50.661178 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.661871 | orchestrator | 2026-01-01 00:59:50.661888 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-01 00:59:50.661928 | orchestrator | Thursday 01 January 2026 00:56:36 +0000 (0:00:01.611) 0:04:09.124 ****** 2026-01-01 00:59:50.661937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.662001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-01 00:59:50.662045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-01 00:59:50.662052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.662107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.662116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-01 00:59:50.662126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.662132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.662201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.662211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-01 00:59:50.662221 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.662227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-01 00:59:50.662311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 
'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-01 00:59:50.662320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:59:50.662330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-01 00:59:50.662390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-01 00:59:50.662407 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.662416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 
00:59:50.662435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.662445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.662454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.662466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.662546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-01 00:59:50.662560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-01 00:59:50.662569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.662578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.662586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-01 00:59:50.662890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-01 00:59:50.662901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.662908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.662913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.662975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:59:50.662991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:59:50.662997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.663003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.663009 | orchestrator | 2026-01-01 00:59:50.663015 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-01 00:59:50.663021 | orchestrator | Thursday 01 January 2026 00:56:42 +0000 (0:00:05.760) 0:04:14.885 ****** 2026-01-01 00:59:50.663026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.663074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-01 00:59:50.663088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-01 00:59:50.663095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.663113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.663161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-01 00:59:50.663173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.663183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-01 00:59:50.663195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.663205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:59:50.663331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.663338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.663343 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.663350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-01 00:59:50.663416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-01 00:59:50.663423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.663429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.663491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-01 00:59:50.663499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.663505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 
'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-01 00:59:50.663511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-01 00:59:50.663658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.663766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.663772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.663794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-01 00:59:50.663804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': 'NONE', 'timeout': '30'}}})  2026-01-01 00:59:50.663810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.663861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.663870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.663881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:59:50.663890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-01 00:59:50.663895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:59:50.663925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-01 00:59:50.663932 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.663937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 
'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.663943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-01 00:59:50.663948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-01 00:59:50.663956 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.663961 | orchestrator |
2026-01-01 00:59:50.663966 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-01-01 00:59:50.663972 | orchestrator | Thursday 01 January 2026 00:56:44 +0000 (0:00:01.957) 0:04:16.843 ******
2026-01-01 00:59:50.663977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.663985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.663990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.663996 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.664002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664007 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.664014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664048 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.664053 | orchestrator |
2026-01-01 00:59:50.664058 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-01-01 00:59:50.664063 | orchestrator | Thursday 01 January 2026 00:56:46 +0000 (0:00:02.335) 0:04:19.178 ******
2026-01-01 00:59:50.664068 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.664072 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.664077 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.664082 | orchestrator |
2026-01-01 00:59:50.664087 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-01-01 00:59:50.664092 | orchestrator | Thursday 01 January 2026 00:56:48 +0000 (0:00:01.273) 0:04:20.452 ******
2026-01-01 00:59:50.664096 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.664101 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.664106 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.664111 | orchestrator |
2026-01-01 00:59:50.664116 | orchestrator | TASK [include_role : placement] ************************************************
2026-01-01 00:59:50.664120 | orchestrator | Thursday 01 January 2026 00:56:50 +0000 (0:00:02.455) 0:04:22.908 ******
2026-01-01 00:59:50.664125 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:59:50.664130 | orchestrator |
2026-01-01 00:59:50.664154 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-01-01 00:59:50.664163 | orchestrator | Thursday 01 January 2026 00:56:52 +0000 (0:00:01.607) 0:04:24.516 ******
2026-01-01 00:59:50.664168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-01 00:59:50.664183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-01 00:59:50.664226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-01 00:59:50.664234 | orchestrator |
2026-01-01 00:59:50.664239 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-01-01 00:59:50.664244 | orchestrator | Thursday 01 January 2026 00:56:55 +0000 (0:00:03.566) 0:04:28.083 ******
2026-01-01 00:59:50.664250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-01 00:59:50.664258 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.664264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-01 00:59:50.664269 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.664274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-01-01 00:59:50.664279 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.664284 | orchestrator |
2026-01-01 00:59:50.664289 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-01-01 00:59:50.664294 | orchestrator | Thursday 01 January 2026 00:56:56 +0000 (0:00:00.805) 0:04:28.888 ******
2026-01-01 00:59:50.664301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-01 00:59:50.664338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-01 00:59:50.664345 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.664350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-01 00:59:50.664359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-01 00:59:50.664364 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.664369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-01 00:59:50.664374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-01-01 00:59:50.664379 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.664384 | orchestrator |
2026-01-01 00:59:50.664388 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-01-01 00:59:50.664393 | orchestrator | Thursday 01 January 2026 00:56:57 +0000 (0:00:01.160) 0:04:30.049 ******
2026-01-01 00:59:50.664398 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.664403 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.664408 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.664421 | orchestrator |
2026-01-01 00:59:50.664426 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-01-01 00:59:50.664431 | orchestrator | Thursday 01 January 2026 00:56:59 +0000 (0:00:01.467) 0:04:31.517 ******
2026-01-01 00:59:50.664435 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.664440 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.664445 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.664450 | orchestrator |
2026-01-01 00:59:50.664455 | orchestrator | TASK [include_role : nova] *****************************************************
2026-01-01 00:59:50.664459 | orchestrator | Thursday 01 January 2026 00:57:01 +0000 (0:00:02.474) 0:04:33.991 ******
2026-01-01 00:59:50.664464 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:59:50.664469 | orchestrator |
2026-01-01 00:59:50.664474 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-01-01 00:59:50.664479 | orchestrator | Thursday 01 January 2026 00:57:03 +0000 (0:00:01.361) 0:04:35.353 ******
2026-01-01 00:59:50.664484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664673 | orchestrator |
2026-01-01 00:59:50.664679 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-01-01 00:59:50.664684 | orchestrator | Thursday 01 January 2026 00:57:10 +0000 (0:00:07.186) 0:04:42.540 ******
2026-01-01 00:59:50.664689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664752 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.664777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-01 00:59:50.664829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664834 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.664840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-01 00:59:50.664850 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.664855 | orchestrator |
2026-01-01 00:59:50.664860 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-01-01 00:59:50.664865 | orchestrator | Thursday 01 January 2026 00:57:11 +0000 (0:00:00.822) 0:04:43.362 ******
2026-01-01 00:59:50.664878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664903 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.664908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664950 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.664955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-01 00:59:50.664960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-
00:59:50.664965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.664969 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.664974 | orchestrator | 2026-01-01 00:59:50.664979 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-01 00:59:50.664984 | orchestrator | Thursday 01 January 2026 00:57:12 +0000 (0:00:01.066) 0:04:44.429 ****** 2026-01-01 00:59:50.664989 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.664994 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.664998 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.665003 | orchestrator | 2026-01-01 00:59:50.665008 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-01 00:59:50.665013 | orchestrator | Thursday 01 January 2026 00:57:13 +0000 (0:00:01.591) 0:04:46.021 ****** 2026-01-01 00:59:50.665018 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.665023 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.665027 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.665032 | orchestrator | 2026-01-01 00:59:50.665037 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-01 00:59:50.665042 | orchestrator | Thursday 01 January 2026 00:57:15 +0000 (0:00:02.072) 0:04:48.093 ****** 2026-01-01 00:59:50.665050 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.665055 | orchestrator | 2026-01-01 00:59:50.665060 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-01 00:59:50.665065 | orchestrator | Thursday 01 
January 2026 00:57:18 +0000 (0:00:02.462) 0:04:50.556 ****** 2026-01-01 00:59:50.665070 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-2, testbed-node-1, testbed-node-0 => (item=nova-novncproxy) 2026-01-01 00:59:50.665075 | orchestrator | 2026-01-01 00:59:50.665080 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-01 00:59:50.665085 | orchestrator | Thursday 01 January 2026 00:57:19 +0000 (0:00:01.397) 0:04:51.954 ****** 2026-01-01 00:59:50.665090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-01 00:59:50.665096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-01 00:59:50.665116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-01 00:59:50.665122 | orchestrator | 2026-01-01 00:59:50.665127 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-01 00:59:50.665133 | orchestrator | Thursday 01 January 2026 00:57:24 +0000 (0:00:04.872) 0:04:56.827 ****** 2026-01-01 00:59:50.665138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:59:50.665143 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.665148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:59:50.665153 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.665158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:59:50.665166 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.665170 | orchestrator | 2026-01-01 00:59:50.665176 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-01 00:59:50.665180 | orchestrator | Thursday 01 January 2026 00:57:25 +0000 (0:00:01.234) 0:04:58.061 ****** 2026-01-01 00:59:50.665185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:59:50.665191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:59:50.665196 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.665201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:59:50.665206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:59:50.665211 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.665216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:59:50.665221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:59:50.665226 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.665230 | orchestrator | 2026-01-01 00:59:50.665235 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-01 00:59:50.665240 | orchestrator | Thursday 01 January 2026 00:57:27 +0000 (0:00:01.863) 0:04:59.924 ****** 2026-01-01 00:59:50.665245 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.665251 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.665257 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.665262 | orchestrator | 2026-01-01 00:59:50.665270 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-01 00:59:50.665289 | orchestrator | Thursday 01 January 2026 00:57:31 +0000 (0:00:04.188) 0:05:04.113 ****** 2026-01-01 00:59:50.665296 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.665302 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.665307 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.665313 | orchestrator | 2026-01-01 00:59:50.665318 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-01 00:59:50.665324 | orchestrator | Thursday 01 January 2026 00:57:35 +0000 (0:00:03.652) 0:05:07.765 ****** 2026-01-01 00:59:50.665330 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 
2026-01-01 00:59:50.665335 | orchestrator | 2026-01-01 00:59:50.665341 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-01 00:59:50.665347 | orchestrator | Thursday 01 January 2026 00:57:36 +0000 (0:00:01.221) 0:05:08.987 ****** 2026-01-01 00:59:50.665355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:59:50.665361 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.665367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:59:50.665373 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.665379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:59:50.665385 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.665404 | orchestrator | 2026-01-01 00:59:50.665410 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-01 00:59:50.665416 | orchestrator | Thursday 01 January 2026 00:57:37 +0000 (0:00:01.224) 0:05:10.211 ****** 2026-01-01 00:59:50.665421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:59:50.665427 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.665433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:59:50.665439 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.665460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:59:50.665467 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.665478 | orchestrator | 2026-01-01 00:59:50.665484 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-01 00:59:50.665490 | orchestrator | Thursday 01 January 2026 00:57:39 +0000 (0:00:01.174) 0:05:11.386 ****** 2026-01-01 00:59:50.665496 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.665501 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.665507 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.665513 | orchestrator | 2026-01-01 00:59:50.665553 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-01 00:59:50.665559 | orchestrator | Thursday 01 January 2026 00:57:40 +0000 (0:00:01.757) 0:05:13.143 ****** 2026-01-01 00:59:50.665565 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:50.665571 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:50.665577 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:50.665582 | orchestrator | 2026-01-01 00:59:50.665588 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-01 00:59:50.665594 | orchestrator | Thursday 01 January 2026 00:57:43 +0000 (0:00:02.679) 0:05:15.823 ****** 2026-01-01 00:59:50.665600 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:50.665605 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:50.665611 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:50.665615 | orchestrator | 2026-01-01 00:59:50.665620 | 
orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-01 00:59:50.665625 | orchestrator | Thursday 01 January 2026 00:57:46 +0000 (0:00:03.333) 0:05:19.157 ****** 2026-01-01 00:59:50.665630 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-01 00:59:50.665635 | orchestrator | 2026-01-01 00:59:50.665640 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-01 00:59:50.665645 | orchestrator | Thursday 01 January 2026 00:57:47 +0000 (0:00:00.945) 0:05:20.102 ****** 2026-01-01 00:59:50.665650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:59:50.665655 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.665660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:59:50.665665 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.665670 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:59:50.665675 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.665680 | orchestrator | 2026-01-01 00:59:50.665685 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-01 00:59:50.665695 | orchestrator | Thursday 01 January 2026 00:57:49 +0000 (0:00:01.696) 0:05:21.798 ****** 2026-01-01 00:59:50.665700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:59:50.665705 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.665729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:59:50.665735 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.665740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:59:50.665745 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.665750 | orchestrator | 2026-01-01 00:59:50.665755 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-01 00:59:50.665760 | orchestrator | Thursday 01 January 2026 00:57:50 +0000 (0:00:01.162) 0:05:22.961 ****** 2026-01-01 00:59:50.665765 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.665770 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.665775 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.665779 | orchestrator | 2026-01-01 00:59:50.665784 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-01 00:59:50.665789 | orchestrator | Thursday 01 January 2026 00:57:52 +0000 (0:00:01.592) 0:05:24.553 ****** 2026-01-01 00:59:50.665794 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:50.665799 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:50.665804 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:50.665809 | orchestrator | 2026-01-01 00:59:50.665814 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 
2026-01-01 00:59:50.665818 | orchestrator | Thursday 01 January 2026 00:57:54 +0000 (0:00:02.709) 0:05:27.263 ****** 2026-01-01 00:59:50.665823 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:50.665828 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:50.665833 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:50.665838 | orchestrator | 2026-01-01 00:59:50.665843 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-01 00:59:50.665848 | orchestrator | Thursday 01 January 2026 00:57:58 +0000 (0:00:03.507) 0:05:30.771 ****** 2026-01-01 00:59:50.665853 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.665858 | orchestrator | 2026-01-01 00:59:50.665863 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-01 00:59:50.665868 | orchestrator | Thursday 01 January 2026 00:58:00 +0000 (0:00:01.656) 0:05:32.428 ****** 2026-01-01 00:59:50.665873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-01 
00:59:50.665882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:59:50.665905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.665911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.665917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.665922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-01 00:59:50.665931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:59:50.665937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.665957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.665963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.665968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-01 00:59:50.665974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:59:50.665982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.665987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.665994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.665999 | orchestrator | 2026-01-01 00:59:50.666045 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-01 00:59:50.666053 | orchestrator | Thursday 01 January 2026 00:58:03 +0000 (0:00:03.774) 0:05:36.202 ****** 2026-01-01 00:59:50.666059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-01 00:59:50.666064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:59:50.666069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.666077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.666082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.666088 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.666110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-01 00:59:50.666117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:59:50.666122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.666127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.666135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.666141 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.666146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-01 00:59:50.666167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:59:50.666173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.666178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:59:50.666186 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:59:50.666191 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.666197 | orchestrator | 2026-01-01 00:59:50.666202 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-01 00:59:50.666207 | orchestrator | Thursday 01 January 2026 00:58:04 +0000 (0:00:01.071) 0:05:37.274 ****** 2026-01-01 00:59:50.666212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:59:50.666218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:59:50.666224 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.666229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:59:50.666234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})  2026-01-01 00:59:50.666240 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.666245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:59:50.666250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:59:50.666255 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.666260 | orchestrator | 2026-01-01 00:59:50.666265 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-01 00:59:50.666270 | orchestrator | Thursday 01 January 2026 00:58:06 +0000 (0:00:01.359) 0:05:38.633 ****** 2026-01-01 00:59:50.666275 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.666280 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.666285 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.666289 | orchestrator | 2026-01-01 00:59:50.666294 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-01 00:59:50.666301 | orchestrator | Thursday 01 January 2026 00:58:07 +0000 (0:00:01.329) 0:05:39.963 ****** 2026-01-01 00:59:50.666320 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.666325 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.666331 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.666336 | orchestrator | 2026-01-01 00:59:50.666341 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-01 00:59:50.666346 | orchestrator | Thursday 01 January 2026 00:58:09 +0000 (0:00:02.262) 0:05:42.225 ****** 2026-01-01 00:59:50.666351 | 
orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.666356 | orchestrator | 2026-01-01 00:59:50.666361 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-01 00:59:50.666370 | orchestrator | Thursday 01 January 2026 00:58:11 +0000 (0:00:01.714) 0:05:43.939 ****** 2026-01-01 00:59:50.666376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.666382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.666388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.666422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk 
GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 00:59:50.666430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 00:59:50.666439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 00:59:50.666444 | orchestrator | 2026-01-01 00:59:50.666449 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-01 00:59:50.666454 | orchestrator | Thursday 01 January 2026 00:58:16 +0000 (0:00:05.346) 0:05:49.286 ****** 2026-01-01 00:59:50.666460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.666480 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 00:59:50.666490 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.666495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.666501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 00:59:50.666507 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.666512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.666550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 00:59:50.666560 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.666565 | orchestrator | 2026-01-01 00:59:50.666570 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-01 00:59:50.666575 | orchestrator | Thursday 01 January 2026 00:58:17 +0000 (0:00:00.631) 0:05:49.918 ****** 2026-01-01 00:59:50.666580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.666585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-01 00:59:50.666590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-01 00:59:50.666595 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.666600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.666606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-01 00:59:50.666611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-01 00:59:50.666616 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.666621 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.666626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-01 00:59:50.666631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-01 00:59:50.666639 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.666644 | orchestrator | 2026-01-01 00:59:50.666649 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-01 00:59:50.666653 | orchestrator | Thursday 01 January 2026 00:58:19 +0000 (0:00:01.649) 0:05:51.568 ****** 2026-01-01 00:59:50.666658 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.666663 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.666668 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.666673 | orchestrator | 2026-01-01 00:59:50.666677 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-01 00:59:50.666685 | orchestrator | Thursday 01 January 2026 00:58:19 +0000 (0:00:00.452) 0:05:52.021 ****** 2026-01-01 00:59:50.666703 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.666709 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.666714 | orchestrator | skipping: [testbed-node-2] 
2026-01-01 00:59:50.666719 | orchestrator | 2026-01-01 00:59:50.666724 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-01 00:59:50.666728 | orchestrator | Thursday 01 January 2026 00:58:21 +0000 (0:00:01.439) 0:05:53.460 ****** 2026-01-01 00:59:50.666733 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.666738 | orchestrator | 2026-01-01 00:59:50.666743 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-01 00:59:50.666748 | orchestrator | Thursday 01 January 2026 00:58:23 +0000 (0:00:01.869) 0:05:55.330 ****** 2026-01-01 00:59:50.666753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-01 00:59:50.666759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 00:59:50.666765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.666800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-01 00:59:50.666807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-01 00:59:50.666812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 00:59:50.666818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 00:59:50.666826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666831 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.666864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.666874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.666883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-01 00:59:50.666904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.666921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.666926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 00:59:50.666952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-01 00:59:50.666959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-01 00:59:50.666964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.666988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.666995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.667000 | orchestrator | 2026-01-01 00:59:50.667019 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-01 
00:59:50.667024 | orchestrator | Thursday 01 January 2026 00:58:27 +0000 (0:00:04.553) 0:05:59.883 ****** 2026-01-01 00:59:50.667030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-01 00:59:50.667036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 00:59:50.667041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.667075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.667082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-01 00:59:50.667087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-01 00:59:50.667107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.667116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 00:59:50.667122 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.667127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667137 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.667146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.667154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-01 00:59:50.667164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-01 00:59:50.667170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 00:59:50.667175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667193 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.667203 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.667213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.667219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 00:59:50.667252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-01 00:59:50.667259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:59:50.667269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 00:59:50.667274 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.667279 | orchestrator | 2026-01-01 00:59:50.667284 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-01 00:59:50.667291 | orchestrator | Thursday 01 January 2026 00:58:28 +0000 (0:00:00.855) 0:06:00.739 ****** 2026-01-01 00:59:50.667299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check 
send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-01 00:59:50.667305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-01 00:59:50.667311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.667319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.667325 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.667330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-01 00:59:50.667335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-01 00:59:50.667340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.667345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.667350 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.667356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-01 00:59:50.667361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-01 00:59:50.667366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.667376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-01 00:59:50.667381 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.667386 | orchestrator | 2026-01-01 00:59:50.667391 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-01 00:59:50.667396 | orchestrator | Thursday 01 January 2026 00:58:29 +0000 (0:00:01.089) 0:06:01.828 ****** 2026-01-01 00:59:50.667406 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.667411 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.667416 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.667421 | orchestrator | 2026-01-01 00:59:50.667426 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-01 00:59:50.667431 | orchestrator | Thursday 01 January 2026 00:58:30 +0000 (0:00:00.917) 0:06:02.746 ****** 2026-01-01 00:59:50.667436 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.667441 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.667445 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.667450 | orchestrator | 2026-01-01 00:59:50.667455 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-01 00:59:50.667460 | orchestrator | Thursday 01 January 2026 00:58:32 +0000 (0:00:01.743) 0:06:04.489 ****** 2026-01-01 
00:59:50.667465 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.667470 | orchestrator | 2026-01-01 00:59:50.667475 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-01 00:59:50.667480 | orchestrator | Thursday 01 January 2026 00:58:33 +0000 (0:00:01.608) 0:06:06.098 ****** 2026-01-01 00:59:50.667485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:59:50.667491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:59:50.667501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-01 00:59:50.667510 | orchestrator | 2026-01-01 00:59:50.667539 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-01 00:59:50.667548 | orchestrator | Thursday 01 January 2026 00:58:36 +0000 (0:00:02.643) 0:06:08.742 ****** 2026-01-01 00:59:50.667556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:59:50.667565 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.667573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:59:50.667580 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.667588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-01 00:59:50.667596 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.667604 | orchestrator | 2026-01-01 00:59:50.667611 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-01 00:59:50.667619 | orchestrator | Thursday 01 January 2026 00:58:37 +0000 (0:00:00.815) 0:06:09.557 ****** 2026-01-01 00:59:50.667627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-01 00:59:50.667640 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.667647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-01 00:59:50.667652 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.667660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  
2026-01-01 00:59:50.667665 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.667670 | orchestrator | 2026-01-01 00:59:50.667678 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-01 00:59:50.667683 | orchestrator | Thursday 01 January 2026 00:58:37 +0000 (0:00:00.701) 0:06:10.259 ****** 2026-01-01 00:59:50.667688 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.667693 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.667698 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.667703 | orchestrator | 2026-01-01 00:59:50.667708 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-01 00:59:50.667713 | orchestrator | Thursday 01 January 2026 00:58:38 +0000 (0:00:00.440) 0:06:10.699 ****** 2026-01-01 00:59:50.667718 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.667722 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.667727 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.667732 | orchestrator | 2026-01-01 00:59:50.667737 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-01 00:59:50.667742 | orchestrator | Thursday 01 January 2026 00:58:40 +0000 (0:00:01.654) 0:06:12.353 ****** 2026-01-01 00:59:50.667747 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.667752 | orchestrator | 2026-01-01 00:59:50.667757 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-01 00:59:50.667762 | orchestrator | Thursday 01 January 2026 00:58:42 +0000 (0:00:02.092) 0:06:14.446 ****** 2026-01-01 00:59:50.667767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-01 00:59:50.667773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-01 00:59:50.667785 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-01 00:59:50.667794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-01 00:59:50.667800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-01 00:59:50.667805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-01 00:59:50.667814 | orchestrator | 2026-01-01 00:59:50.667819 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-01 00:59:50.667824 | orchestrator | Thursday 01 January 2026 00:58:49 +0000 (0:00:06.874) 0:06:21.321 ****** 2026-01-01 00:59:50.667834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-01 00:59:50.667840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-01 00:59:50.667846 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.667851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-01 00:59:50.667856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-01 00:59:50.667865 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.667873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-01 00:59:50.667913 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-01 00:59:50.667925 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.667930 | orchestrator | 2026-01-01 00:59:50.667935 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-01 00:59:50.667940 | orchestrator | Thursday 01 January 2026 00:58:50 +0000 (0:00:01.292) 0:06:22.613 ****** 2026-01-01 00:59:50.667946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-01 00:59:50.667951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-01 00:59:50.667956 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-01 00:59:50.667961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-01 00:59:50.667970 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.667975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-01 00:59:50.667980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-01 00:59:50.667985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-01 00:59:50.667990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-01 00:59:50.667996 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.668001 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-01 00:59:50.668012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-01 00:59:50.668017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-01 00:59:50.668022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-01 00:59:50.668028 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.668033 | orchestrator | 2026-01-01 00:59:50.668037 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-01 00:59:50.668042 | orchestrator | Thursday 01 January 2026 00:58:51 +0000 (0:00:01.502) 0:06:24.116 ****** 2026-01-01 00:59:50.668047 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.668052 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.668057 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.668062 | orchestrator | 2026-01-01 00:59:50.668067 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-01 00:59:50.668072 | orchestrator | Thursday 01 January 2026 00:58:53 
+0000 (0:00:01.407) 0:06:25.524 ****** 2026-01-01 00:59:50.668077 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:50.668081 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:50.668086 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:50.668091 | orchestrator | 2026-01-01 00:59:50.668096 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-01 00:59:50.668101 | orchestrator | Thursday 01 January 2026 00:58:55 +0000 (0:00:02.404) 0:06:27.929 ****** 2026-01-01 00:59:50.668106 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.668114 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.668119 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.668124 | orchestrator | 2026-01-01 00:59:50.668128 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-01 00:59:50.668133 | orchestrator | Thursday 01 January 2026 00:58:56 +0000 (0:00:00.372) 0:06:28.301 ****** 2026-01-01 00:59:50.668138 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.668143 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.668149 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.668154 | orchestrator | 2026-01-01 00:59:50.668159 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-01 00:59:50.668164 | orchestrator | Thursday 01 January 2026 00:58:56 +0000 (0:00:00.692) 0:06:28.994 ****** 2026-01-01 00:59:50.668169 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.668174 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.668180 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.668185 | orchestrator | 2026-01-01 00:59:50.668190 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-01 00:59:50.668195 | orchestrator | Thursday 01 January 2026 00:58:57 +0000 
(0:00:00.373) 0:06:29.368 ****** 2026-01-01 00:59:50.668200 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.668205 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.668211 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.668216 | orchestrator | 2026-01-01 00:59:50.668221 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-01 00:59:50.668226 | orchestrator | Thursday 01 January 2026 00:58:57 +0000 (0:00:00.336) 0:06:29.704 ****** 2026-01-01 00:59:50.668231 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.668236 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.668242 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.668247 | orchestrator | 2026-01-01 00:59:50.668252 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-01-01 00:59:50.668257 | orchestrator | Thursday 01 January 2026 00:58:57 +0000 (0:00:00.321) 0:06:30.026 ****** 2026-01-01 00:59:50.668262 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:59:50.668267 | orchestrator | 2026-01-01 00:59:50.668273 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-01 00:59:50.668278 | orchestrator | Thursday 01 January 2026 00:58:59 +0000 (0:00:01.875) 0:06:31.901 ****** 2026-01-01 00:59:50.668283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.668295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.668301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-01 00:59:50.668312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:59:50.668317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:59:50.668323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:59:50.668329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-01-01 00:59:50.668334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:59:50.668344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:59:50.668355 | orchestrator | 2026-01-01 00:59:50.668360 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-01 00:59:50.668366 | orchestrator | Thursday 01 January 2026 00:59:02 +0000 (0:00:02.698) 0:06:34.599 ****** 2026-01-01 00:59:50.668371 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 00:59:50.668376 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:59:50.668381 | orchestrator | } 2026-01-01 00:59:50.668386 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 00:59:50.668391 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:59:50.668397 | orchestrator | } 2026-01-01 00:59:50.668402 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 00:59:50.668407 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 00:59:50.668412 | orchestrator | } 2026-01-01 00:59:50.668417 | orchestrator 
| 2026-01-01 00:59:50.668422 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 00:59:50.668427 | orchestrator | Thursday 01 January 2026 00:59:02 +0000 (0:00:00.356) 0:06:34.956 ****** 2026-01-01 00:59:50.668433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:59:50.668439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:59:50.668444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.668449 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:50.668455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:59:50.668460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:59:50.668475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.668481 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:50.668487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:59:50.668492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:59:50.668498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:59:50.668503 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:50.668508 | orchestrator | 2026-01-01 00:59:50.668513 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-01 00:59:50.668537 | orchestrator | Thursday 01 January 2026 00:59:04 +0000 (0:00:01.900) 0:06:36.856 ****** 2026-01-01 00:59:50.668542 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:50.668548 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:50.668553 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:50.668558 | orchestrator | 2026-01-01 00:59:50.668563 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-01 00:59:50.668569 | orchestrator | Thursday 01 January 2026 00:59:05 +0000 (0:00:01.193) 0:06:38.050 ****** 2026-01-01 00:59:50.668574 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:50.668579 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:50.668584 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:50.668589 | orchestrator | 2026-01-01 00:59:50.668594 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-01 00:59:50.668600 | orchestrator | Thursday 01 January 2026 00:59:06 +0000 (0:00:00.390) 0:06:38.441 ****** 2026-01-01 00:59:50.668605 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:50.668610 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:50.668615 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:50.668620 | orchestrator | 2026-01-01 00:59:50.668629 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-01 00:59:50.668635 | orchestrator | Thursday 01 January 2026 00:59:07 +0000 (0:00:01.074) 0:06:39.515 ****** 2026-01-01 00:59:50.668640 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:50.668645 | orchestrator | ok: [testbed-node-1] 
2026-01-01 00:59:50.668650 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:59:50.668655 | orchestrator |
2026-01-01 00:59:50.668661 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-01-01 00:59:50.668666 | orchestrator | Thursday 01 January 2026 00:59:08 +0000 (0:00:01.018) 0:06:40.534 ******
2026-01-01 00:59:50.668671 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:59:50.668676 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:59:50.668681 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:59:50.668687 | orchestrator |
2026-01-01 00:59:50.668692 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-01-01 00:59:50.668697 | orchestrator | Thursday 01 January 2026 00:59:09 +0000 (0:00:01.445) 0:06:41.980 ******
2026-01-01 00:59:50.668702 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.668707 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.668713 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.668718 | orchestrator |
2026-01-01 00:59:50.668723 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-01 00:59:50.668728 | orchestrator | Thursday 01 January 2026 00:59:14 +0000 (0:00:05.094) 0:06:47.075 ******
2026-01-01 00:59:50.668733 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:59:50.668739 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:59:50.668746 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:59:50.668752 | orchestrator |
2026-01-01 00:59:50.668757 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-01 00:59:50.668765 | orchestrator | Thursday 01 January 2026 00:59:17 +0000 (0:00:02.856) 0:06:49.931 ******
2026-01-01 00:59:50.668770 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.668775 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.668781 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.668786 | orchestrator |
2026-01-01 00:59:50.668791 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-01 00:59:50.668796 | orchestrator | Thursday 01 January 2026 00:59:31 +0000 (0:00:14.236) 0:07:04.167 ******
2026-01-01 00:59:50.668802 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:59:50.668807 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:59:50.668812 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:59:50.668817 | orchestrator |
2026-01-01 00:59:50.668822 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-01 00:59:50.668828 | orchestrator | Thursday 01 January 2026 00:59:33 +0000 (0:00:01.678) 0:07:05.846 ******
2026-01-01 00:59:50.668833 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:59:50.668838 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:59:50.668843 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:59:50.668848 | orchestrator |
2026-01-01 00:59:50.668853 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-01 00:59:50.668859 | orchestrator | Thursday 01 January 2026 00:59:38 +0000 (0:00:04.680) 0:07:10.526 ******
2026-01-01 00:59:50.668864 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.668869 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.668874 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.668879 | orchestrator |
2026-01-01 00:59:50.668885 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-01 00:59:50.668890 | orchestrator | Thursday 01 January 2026 00:59:38 +0000 (0:00:00.359) 0:07:10.885 ******
2026-01-01 00:59:50.668895 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.668900 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.668905 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.668911 | orchestrator |
2026-01-01 00:59:50.668916 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-01 00:59:50.668925 | orchestrator | Thursday 01 January 2026 00:59:38 +0000 (0:00:00.374) 0:07:11.259 ******
2026-01-01 00:59:50.668930 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.668935 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.668940 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.668946 | orchestrator |
2026-01-01 00:59:50.668951 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-01 00:59:50.668956 | orchestrator | Thursday 01 January 2026 00:59:39 +0000 (0:00:00.715) 0:07:11.974 ******
2026-01-01 00:59:50.668961 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.668966 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.668972 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.668977 | orchestrator |
2026-01-01 00:59:50.668982 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-01 00:59:50.668987 | orchestrator | Thursday 01 January 2026 00:59:40 +0000 (0:00:00.408) 0:07:12.383 ******
2026-01-01 00:59:50.668992 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.668997 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.669003 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.669008 | orchestrator |
2026-01-01 00:59:50.669013 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-01 00:59:50.669018 | orchestrator | Thursday 01 January 2026 00:59:40 +0000 (0:00:00.364) 0:07:12.747 ******
2026-01-01 00:59:50.669024 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:50.669029 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:50.669034 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:50.669039 | orchestrator |
2026-01-01 00:59:50.669045 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-01 00:59:50.669050 | orchestrator | Thursday 01 January 2026 00:59:40 +0000 (0:00:00.341) 0:07:13.088 ******
2026-01-01 00:59:50.669055 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:59:50.669060 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:59:50.669065 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:59:50.669070 | orchestrator |
2026-01-01 00:59:50.669076 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-01 00:59:50.669081 | orchestrator | Thursday 01 January 2026 00:59:45 +0000 (0:00:05.161) 0:07:18.250 ******
2026-01-01 00:59:50.669086 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:59:50.669091 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:59:50.669096 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:59:50.669101 | orchestrator |
2026-01-01 00:59:50.669107 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:59:50.669112 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-01 00:59:50.669118 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-01 00:59:50.669123 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-01 00:59:50.669128 | orchestrator |
2026-01-01 00:59:50.669134 | orchestrator |
2026-01-01 00:59:50.669139 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:59:50.669144 | orchestrator | Thursday 01 January 2026 00:59:46 +0000 (0:00:00.922) 0:07:19.172 ******
2026-01-01 00:59:50.669149 | orchestrator | ===============================================================================
2026-01-01 00:59:50.669154 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.24s
2026-01-01 00:59:50.669159 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 7.26s
2026-01-01 00:59:50.669165 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.19s
2026-01-01 00:59:50.669173 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.87s
2026-01-01 00:59:50.669181 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.27s
2026-01-01 00:59:50.669205 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 6.16s
2026-01-01 00:59:50.669211 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.76s
2026-01-01 00:59:50.669216 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.39s
2026-01-01 00:59:50.669221 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.35s
2026-01-01 00:59:50.669226 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 5.28s
2026-01-01 00:59:50.669231 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.16s
2026-01-01 00:59:50.669237 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.09s
2026-01-01 00:59:50.669242 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.87s
2026-01-01 00:59:50.669247 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.81s
2026-01-01 00:59:50.669252 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.69s
2026-01-01 00:59:50.669257 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.68s
2026-01-01 00:59:50.669262 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.58s
2026-01-01 00:59:50.669267 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.55s
2026-01-01 00:59:50.669273 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.51s
2026-01-01 00:59:50.669278 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.43s
2026-01-01 00:59:50.669283 | orchestrator | 2026-01-01 00:59:50 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 00:59:50.669288 | orchestrator | 2026-01-01 00:59:50 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:50.669294 | orchestrator | 2026-01-01 00:59:50 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:53.697473 | orchestrator | 2026-01-01 00:59:53 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 00:59:53.699616 | orchestrator | 2026-01-01 00:59:53 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 00:59:53.702566 | orchestrator | 2026-01-01 00:59:53 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:53.703073 | orchestrator | 2026-01-01 00:59:53 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:56.746577 | orchestrator | 2026-01-01 00:59:56 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 00:59:56.748302 | orchestrator | 2026-01-01 00:59:56 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 00:59:56.751824 | orchestrator | 2026-01-01 00:59:56 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:56.752074 | orchestrator | 2026-01-01 00:59:56 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:59.806688 | orchestrator | 2026-01-01 00:59:59 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 00:59:59.808044 | orchestrator | 2026-01-01 00:59:59 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 00:59:59.810353 | orchestrator | 2026-01-01 00:59:59 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 00:59:59.810438 | orchestrator | 2026-01-01 00:59:59 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:02.848503 | orchestrator | 2026-01-01 01:00:02 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:02.849397 | orchestrator | 2026-01-01 01:00:02 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:02.851795 | orchestrator | 2026-01-01 01:00:02 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:02.851845 | orchestrator | 2026-01-01 01:00:02 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:05.886610 | orchestrator | 2026-01-01 01:00:05 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:05.887668 | orchestrator | 2026-01-01 01:00:05 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:05.888788 | orchestrator | 2026-01-01 01:00:05 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:05.888815 | orchestrator | 2026-01-01 01:00:05 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:08.924454 | orchestrator | 2026-01-01 01:00:08 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:08.924656 | orchestrator | 2026-01-01 01:00:08 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:08.926331 | orchestrator | 2026-01-01 01:00:08 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:08.926361 | orchestrator | 2026-01-01 01:00:08 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:11.960904 | orchestrator | 2026-01-01 01:00:11 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:11.961859 | orchestrator | 2026-01-01 01:00:11 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:11.963357 | orchestrator | 2026-01-01 01:00:11 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:11.963414 | orchestrator | 2026-01-01 01:00:11 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:15.009647 | orchestrator | 2026-01-01 01:00:15 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:15.012317 | orchestrator | 2026-01-01 01:00:15 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:15.013203 | orchestrator | 2026-01-01 01:00:15 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:15.013861 | orchestrator | 2026-01-01 01:00:15 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:18.057742 | orchestrator | 2026-01-01 01:00:18 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:18.058283 | orchestrator | 2026-01-01 01:00:18 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:18.059372 | orchestrator | 2026-01-01 01:00:18 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:18.059400 | orchestrator | 2026-01-01 01:00:18 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:21.093765 | orchestrator | 2026-01-01 01:00:21 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:21.096275 | orchestrator | 2026-01-01 01:00:21 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:21.097097 | orchestrator | 2026-01-01 01:00:21 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:21.097133 | orchestrator | 2026-01-01 01:00:21 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:24.124762 | orchestrator | 2026-01-01 01:00:24 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:24.126316 | orchestrator | 2026-01-01 01:00:24 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:24.127154 | orchestrator | 2026-01-01 01:00:24 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:24.127193 | orchestrator | 2026-01-01 01:00:24 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:27.179113 | orchestrator | 2026-01-01 01:00:27 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:27.180052 | orchestrator | 2026-01-01 01:00:27 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:27.185613 | orchestrator | 2026-01-01 01:00:27 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:27.185651 | orchestrator | 2026-01-01 01:00:27 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:30.240125 | orchestrator | 2026-01-01 01:00:30 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:30.240795 | orchestrator | 2026-01-01 01:00:30 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:30.241898 | orchestrator | 2026-01-01 01:00:30 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:30.242106 | orchestrator | 2026-01-01 01:00:30 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:33.285979 | orchestrator | 2026-01-01 01:00:33 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:33.288607 | orchestrator | 2026-01-01 01:00:33 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:33.292539 | orchestrator | 2026-01-01 01:00:33 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:33.292606 | orchestrator | 2026-01-01 01:00:33 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:36.337313 | orchestrator | 2026-01-01 01:00:36 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:36.338402 | orchestrator | 2026-01-01 01:00:36 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:36.339882 | orchestrator | 2026-01-01 01:00:36 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:36.339922 | orchestrator | 2026-01-01 01:00:36 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:39.386683 | orchestrator | 2026-01-01 01:00:39 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:39.386798 | orchestrator | 2026-01-01 01:00:39 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:39.389647 | orchestrator | 2026-01-01 01:00:39 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:39.389682 | orchestrator | 2026-01-01 01:00:39 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:42.427880 | orchestrator | 2026-01-01 01:00:42 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:42.428950 | orchestrator | 2026-01-01 01:00:42 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:42.431630 | orchestrator | 2026-01-01 01:00:42 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:42.431673 | orchestrator | 2026-01-01 01:00:42 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:45.476879 | orchestrator | 2026-01-01 01:00:45 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:45.478372 | orchestrator | 2026-01-01 01:00:45 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:45.479831 | orchestrator | 2026-01-01 01:00:45 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:45.479874 | orchestrator | 2026-01-01 01:00:45 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:48.525575 | orchestrator | 2026-01-01 01:00:48 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:48.527571 | orchestrator | 2026-01-01 01:00:48 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:48.530317 | orchestrator | 2026-01-01 01:00:48 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:48.530466 | orchestrator | 2026-01-01 01:00:48 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:51.577238 | orchestrator | 2026-01-01 01:00:51 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:51.579233 | orchestrator | 2026-01-01 01:00:51 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:51.580793 | orchestrator | 2026-01-01 01:00:51 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:51.580931 | orchestrator | 2026-01-01 01:00:51 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:54.631345 | orchestrator | 2026-01-01 01:00:54 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:54.633751 | orchestrator | 2026-01-01 01:00:54 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:54.636151 | orchestrator | 2026-01-01 01:00:54 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:54.636205 | orchestrator | 2026-01-01 01:00:54 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:57.677288 | orchestrator | 2026-01-01 01:00:57 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:00:57.677675 | orchestrator | 2026-01-01 01:00:57 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:00:57.678226 | orchestrator | 2026-01-01 01:00:57 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:00:57.678252 | orchestrator | 2026-01-01 01:00:57 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:00.726384 | orchestrator | 2026-01-01 01:01:00 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:00.726804 | orchestrator | 2026-01-01 01:01:00 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:00.730603 | orchestrator | 2026-01-01 01:01:00 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:00.730684 | orchestrator | 2026-01-01 01:01:00 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:03.781852 | orchestrator | 2026-01-01 01:01:03 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:03.784353 | orchestrator | 2026-01-01 01:01:03 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:03.786790 | orchestrator | 2026-01-01 01:01:03 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:03.787575 | orchestrator | 2026-01-01 01:01:03 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:06.827897 | orchestrator | 2026-01-01 01:01:06 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:06.829427 | orchestrator | 2026-01-01 01:01:06 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:06.831126 | orchestrator | 2026-01-01 01:01:06 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:06.831352 | orchestrator | 2026-01-01 01:01:06 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:09.883190 | orchestrator | 2026-01-01 01:01:09 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:09.885323 | orchestrator | 2026-01-01 01:01:09 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:09.887341 | orchestrator | 2026-01-01 01:01:09 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:09.887391 | orchestrator | 2026-01-01 01:01:09 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:12.935952 | orchestrator | 2026-01-01 01:01:12 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:12.936999 | orchestrator | 2026-01-01 01:01:12 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:12.938455 | orchestrator | 2026-01-01 01:01:12 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:12.938544 | orchestrator | 2026-01-01 01:01:12 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:15.980515 | orchestrator | 2026-01-01 01:01:15 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:15.980620 | orchestrator | 2026-01-01 01:01:15 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:15.981908 | orchestrator | 2026-01-01 01:01:15 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:15.981933 | orchestrator | 2026-01-01 01:01:15 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:19.041224 | orchestrator | 2026-01-01 01:01:19 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:19.042520 | orchestrator | 2026-01-01 01:01:19 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:19.043661 | orchestrator | 2026-01-01 01:01:19 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:19.043687 | orchestrator | 2026-01-01 01:01:19 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:22.100404 | orchestrator | 2026-01-01 01:01:22 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:22.102713 | orchestrator | 2026-01-01 01:01:22 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:22.104942 | orchestrator | 2026-01-01 01:01:22 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:22.104970 | orchestrator | 2026-01-01 01:01:22 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:25.154960 | orchestrator | 2026-01-01 01:01:25 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:25.156636 | orchestrator | 2026-01-01 01:01:25 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:25.158409 | orchestrator | 2026-01-01 01:01:25 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:25.158625 | orchestrator | 2026-01-01 01:01:25 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:28.209649 | orchestrator | 2026-01-01 01:01:28 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:28.211003 | orchestrator | 2026-01-01 01:01:28 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:28.212203 | orchestrator | 2026-01-01 01:01:28 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:28.212228 | orchestrator | 2026-01-01 01:01:28 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:31.266397 | orchestrator | 2026-01-01 01:01:31 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:31.268716 | orchestrator | 2026-01-01 01:01:31 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:31.270179 | orchestrator | 2026-01-01 01:01:31 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:31.270633 | orchestrator | 2026-01-01 01:01:31 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:34.326237 | orchestrator | 2026-01-01 01:01:34 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:34.329564 | orchestrator | 2026-01-01 01:01:34 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:34.332361 | orchestrator | 2026-01-01 01:01:34 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:34.332430 | orchestrator | 2026-01-01 01:01:34 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:37.383291 | orchestrator | 2026-01-01 01:01:37 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:37.385024 | orchestrator | 2026-01-01 01:01:37 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:37.387603 | orchestrator | 2026-01-01 01:01:37 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:37.387736 | orchestrator | 2026-01-01 01:01:37 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:40.439776 | orchestrator | 2026-01-01 01:01:40 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:40.441706 | orchestrator | 2026-01-01 01:01:40 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:40.444544 | orchestrator | 2026-01-01 01:01:40 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:40.444614 | orchestrator | 2026-01-01 01:01:40 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:43.493183 | orchestrator | 2026-01-01 01:01:43 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:43.495181 | orchestrator | 2026-01-01 01:01:43 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:43.497917 | orchestrator | 2026-01-01 01:01:43 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state STARTED
2026-01-01 01:01:43.497985 | orchestrator | 2026-01-01 01:01:43 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:46.560713 | orchestrator | 2026-01-01 01:01:46 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED
2026-01-01 01:01:46.561564 | orchestrator | 2026-01-01 01:01:46 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED
2026-01-01 01:01:46.564828 | orchestrator | 2026-01-01 01:01:46 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED
2026-01-01 01:01:46.571424 | orchestrator | 2026-01-01 01:01:46 | INFO  | Task 00391569-f586-4afd-b44b-a2d41ca8e389 is in state SUCCESS
2026-01-01 01:01:46.573906 | orchestrator |
2026-01-01 01:01:46.573952 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-01 01:01:46.573965 | orchestrator | 2.16.14
2026-01-01 01:01:46.573978 | orchestrator |
2026-01-01 01:01:46.573990 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-01-01 01:01:46.574002 | orchestrator |
2026-01-01 01:01:46.574066 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-01 01:01:46.574082 | orchestrator | Thursday 01 January 2026 00:49:28 +0000 (0:00:00.799) 0:00:00.799 ******
2026-01-01 01:01:46.574094 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.574139 | orchestrator |
2026-01-01 01:01:46.574166 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-01 01:01:46.574188 | orchestrator | Thursday 01 January 2026 00:49:29 +0000 (0:00:01.100) 0:00:01.900 ******
2026-01-01 01:01:46.574206 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.574225 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.574244 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.574264 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.574275 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.574286 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.574378 | orchestrator |
2026-01-01 01:01:46.574392 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-01 01:01:46.574403 | orchestrator | Thursday 01 January 2026 00:49:31 +0000 (0:00:01.886) 0:00:03.787 ******
2026-01-01 01:01:46.574414 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.574425 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.574473 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.574485 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.574545 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.574559 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.574572 | orchestrator |
2026-01-01 01:01:46.574585 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-01 01:01:46.574598 | orchestrator | Thursday 01 January 2026 00:49:32 +0000 (0:00:01.062) 0:00:04.849 ******
2026-01-01 01:01:46.574611 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.574624 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.574653 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.574667 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.574680 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.574692 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.574705 | orchestrator |
2026-01-01 01:01:46.574719 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-01 01:01:46.574732 | orchestrator | Thursday 01 January 2026 00:49:33 +0000 (0:00:01.305) 0:00:06.154 ******
2026-01-01 01:01:46.574745 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.574759 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.574772 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.574786 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.574798 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.574812 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.574824 | orchestrator |
2026-01-01 01:01:46.574838 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-01 01:01:46.574852 | orchestrator | Thursday 01 January 2026 00:49:34 +0000 (0:00:01.032) 0:00:07.186 ******
2026-01-01 01:01:46.574864 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.574875 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.574885 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.574896 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.574907 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.574955 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.574967 | orchestrator |
2026-01-01 01:01:46.574978 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-01 01:01:46.574989 | orchestrator | Thursday 01 January 2026 00:49:35 +0000 (0:00:01.098) 0:00:08.285 ******
2026-01-01 01:01:46.574999 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.575010 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.575021 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.575031 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.575042 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.575053 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.575063 | orchestrator |
2026-01-01 01:01:46.575074 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-01 01:01:46.575089 | orchestrator | Thursday 01 January 2026 00:49:37 +0000 (0:00:01.468) 0:00:09.754 ******
2026-01-01 01:01:46.575107 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.575147 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.575170 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.575188 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.575205 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.575225 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.575245 | orchestrator |
2026-01-01 01:01:46.575257 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-01 01:01:46.575268 | orchestrator | Thursday 01 January 2026 00:49:38 +0000 (0:00:01.215) 0:00:10.969 ******
2026-01-01 01:01:46.575279 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.575290 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.575300 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.575311 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.575322 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.575333 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.575343 | orchestrator |
2026-01-01 01:01:46.575354 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-01 01:01:46.575365 | orchestrator | Thursday 01 January 2026 00:49:39 +0000 (0:00:01.213) 0:00:12.183 ******
2026-01-01 01:01:46.575376 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-01 01:01:46.575387 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-01 01:01:46.575398 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-01 01:01:46.575409 | orchestrator |
2026-01-01 01:01:46.575420 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-01 01:01:46.575431 | orchestrator | Thursday 01 January 2026 00:49:40 +0000 (0:00:00.934) 0:00:13.117 ******
2026-01-01 01:01:46.575512 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.575524 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.575535 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.575563 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.575574 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.575585 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.575596 | orchestrator |
2026-01-01 01:01:46.575615 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-01 01:01:46.575634 | orchestrator | Thursday 01 January 2026 00:49:42 +0000 (0:00:01.946) 0:00:15.064 ******
2026-01-01 01:01:46.575653 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-01 01:01:46.575672 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-01 01:01:46.575692 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-01 01:01:46.575710 | orchestrator |
2026-01-01 01:01:46.575728 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-01 01:01:46.575745 | orchestrator | Thursday 01 January 2026 00:49:46 +0000 (0:00:04.335) 0:00:19.400 ******
2026-01-01 01:01:46.575763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-01 01:01:46.575781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-01 01:01:46.575798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-01 01:01:46.575817 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.575836 | orchestrator |
2026-01-01 01:01:46.575854 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-01 01:01:46.575871 | orchestrator | Thursday 01 January 2026 00:49:48 +0000 (0:00:01.357) 0:00:20.758 ******
2026-01-01 01:01:46.575888 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-01 01:01:46.575918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-01 01:01:46.575948 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-01 01:01:46.575965 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.575984 | orchestrator |
2026-01-01 01:01:46.576003 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-01 01:01:46.576021 | orchestrator | Thursday 01 January 2026 00:49:48 +0000 (0:00:00.611) 0:00:21.369 ******
2026-01-01 01:01:46.576042 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-01 01:01:46.576064 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition':
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.576081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.576101 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.576119 | orchestrator | 2026-01-01 01:01:46.576137 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-01 01:01:46.576156 | orchestrator | Thursday 01 January 2026 00:49:49 +0000 (0:00:00.443) 0:00:21.813 ****** 2026-01-01 01:01:46.576184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-01 00:49:43.603284', 'end': '2026-01-01 00:49:43.917919', 'delta': '0:00:00.314635', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.576202 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-01 
00:49:45.161588', 'end': '2026-01-01 00:49:45.528729', 'delta': '0:00:00.367141', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.576219 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-01 00:49:46.175389', 'end': '2026-01-01 00:49:46.508854', 'delta': '0:00:00.333465', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.576238 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.576248 | orchestrator | 2026-01-01 01:01:46.576258 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-01 01:01:46.576271 | orchestrator | Thursday 01 January 2026 00:49:49 +0000 (0:00:00.218) 0:00:22.031 ****** 2026-01-01 01:01:46.576288 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.576305 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.576321 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.576337 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.576353 | orchestrator | ok: [testbed-node-1] 
2026-01-01 01:01:46.576367 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.576381 | orchestrator | 2026-01-01 01:01:46.576399 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-01 01:01:46.576416 | orchestrator | Thursday 01 January 2026 00:49:51 +0000 (0:00:02.400) 0:00:24.431 ****** 2026-01-01 01:01:46.576461 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-01 01:01:46.576474 | orchestrator | 2026-01-01 01:01:46.576484 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-01 01:01:46.576494 | orchestrator | Thursday 01 January 2026 00:49:53 +0000 (0:00:01.303) 0:00:25.734 ****** 2026-01-01 01:01:46.576504 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.576514 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.576523 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.576533 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.576543 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.576552 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.576562 | orchestrator | 2026-01-01 01:01:46.576571 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-01 01:01:46.576581 | orchestrator | Thursday 01 January 2026 00:49:55 +0000 (0:00:02.511) 0:00:28.246 ****** 2026-01-01 01:01:46.576591 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.576601 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.576610 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.576620 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.576630 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.576639 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.576649 | orchestrator | 2026-01-01 01:01:46.576659 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-01-01 01:01:46.576669 | orchestrator | Thursday 01 January 2026 00:49:58 +0000 (0:00:02.507) 0:00:30.753 ****** 2026-01-01 01:01:46.576678 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.576688 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.576698 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.576708 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.576717 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.576727 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.576736 | orchestrator | 2026-01-01 01:01:46.576746 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-01 01:01:46.576756 | orchestrator | Thursday 01 January 2026 00:49:59 +0000 (0:00:01.472) 0:00:32.226 ****** 2026-01-01 01:01:46.576766 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.576776 | orchestrator | 2026-01-01 01:01:46.576786 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-01 01:01:46.576796 | orchestrator | Thursday 01 January 2026 00:49:59 +0000 (0:00:00.145) 0:00:32.371 ****** 2026-01-01 01:01:46.576872 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.576883 | orchestrator | 2026-01-01 01:01:46.576892 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-01 01:01:46.576902 | orchestrator | Thursday 01 January 2026 00:50:00 +0000 (0:00:00.275) 0:00:32.647 ****** 2026-01-01 01:01:46.576912 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.576922 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.576932 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.576952 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.576962 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.576972 | orchestrator | skipping: 
[testbed-node-2] 2026-01-01 01:01:46.576981 | orchestrator | 2026-01-01 01:01:46.576991 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-01 01:01:46.577001 | orchestrator | Thursday 01 January 2026 00:50:01 +0000 (0:00:01.835) 0:00:34.482 ****** 2026-01-01 01:01:46.577011 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.577021 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.577031 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.577040 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.577057 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.577073 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.577090 | orchestrator | 2026-01-01 01:01:46.577106 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-01 01:01:46.577124 | orchestrator | Thursday 01 January 2026 00:50:03 +0000 (0:00:01.893) 0:00:36.376 ****** 2026-01-01 01:01:46.577141 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.577159 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.577171 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.577180 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.577190 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.577200 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.577210 | orchestrator | 2026-01-01 01:01:46.577219 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-01 01:01:46.577229 | orchestrator | Thursday 01 January 2026 00:50:04 +0000 (0:00:00.823) 0:00:37.199 ****** 2026-01-01 01:01:46.577239 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.577249 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.577258 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.577268 | orchestrator | skipping: 
[testbed-node-0] 2026-01-01 01:01:46.577278 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.577287 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.577297 | orchestrator | 2026-01-01 01:01:46.577307 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-01 01:01:46.577316 | orchestrator | Thursday 01 January 2026 00:50:05 +0000 (0:00:01.141) 0:00:38.341 ****** 2026-01-01 01:01:46.577326 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.577351 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.577361 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.577371 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.577380 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.577390 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.577400 | orchestrator | 2026-01-01 01:01:46.577410 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-01 01:01:46.577420 | orchestrator | Thursday 01 January 2026 00:50:06 +0000 (0:00:00.785) 0:00:39.127 ****** 2026-01-01 01:01:46.577429 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.577472 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.577489 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.577504 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.577520 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.577536 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.577550 | orchestrator | 2026-01-01 01:01:46.577566 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-01 01:01:46.577595 | orchestrator | Thursday 01 January 2026 00:50:07 +0000 (0:00:00.793) 0:00:39.920 ****** 2026-01-01 01:01:46.577610 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.577626 | orchestrator | skipping: 
[testbed-node-4] 2026-01-01 01:01:46.577642 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.577658 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.577674 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.577690 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.577708 | orchestrator | 2026-01-01 01:01:46.577725 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-01 01:01:46.577743 | orchestrator | Thursday 01 January 2026 00:50:08 +0000 (0:00:01.452) 0:00:41.373 ****** 2026-01-01 01:01:46.577761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d1d2d931--4931--5db0--ab2b--78163b321b95-osd--block--d1d2d931--4931--5db0--ab2b--78163b321b95', 'dm-uuid-LVM-FK1QBo4OXTjtHYPrIVww6qnrCtuOsh01Ob8BpWWHOHHVW6p4e66V0K211KeJvAdQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf-osd--block--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf', 'dm-uuid-LVM-UyL3VHnPIroTg0dkBDPlejOQABxBvTgUZuLAQXT4NvTA09QOFfW1CPBYa4onRlQo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb51feb3--1d35--54b5--8ae7--a768510de0c7-osd--block--fb51feb3--1d35--54b5--8ae7--a768510de0c7', 'dm-uuid-LVM-24DBmmmuabDFfKPEzAbcmMeHV1fRiLGa6bft9tf2dZWT3PVGce68pIphUi9ykJQQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef-osd--block--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef', 'dm-uuid-LVM-7M310GiLltp8Vnz2csjtNJlk1L0vv4Cmofoy1KrAV9BQUSRNMmV5gY2W3vALnew7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.577996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part1', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part14', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part15', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part16', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578063 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d1d2d931--4931--5db0--ab2b--78163b321b95-osd--block--d1d2d931--4931--5db0--ab2b--78163b321b95'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d8BQ8X-O0q2-8Dr6-D6Yo-uk3T-aAHA-v24QNd', 'scsi-0QEMU_QEMU_HARDDISK_39cbb32c-70e6-49f5-af10-21eee2e163cb', 'scsi-SQEMU_QEMU_HARDDISK_39cbb32c-70e6-49f5-af10-21eee2e163cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf-osd--block--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4KnCTR-rOfZ-7W2z-L9Zo-9C3G-Tf5Y-xbkafe', 'scsi-0QEMU_QEMU_HARDDISK_4d1e5713-0224-44c3-b7e5-3534e8d42515', 'scsi-SQEMU_QEMU_HARDDISK_4d1e5713-0224-44c3-b7e5-3534e8d42515'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20b388ad-01dd-4aa6-8ed7-7c845856acdf', 'scsi-SQEMU_QEMU_HARDDISK_20b388ad-01dd-4aa6-8ed7-7c845856acdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4d57ac9--6a13--5c0f--8510--006bddd85eb9-osd--block--a4d57ac9--6a13--5c0f--8510--006bddd85eb9', 'dm-uuid-LVM-hcC0V33nfWhvavdx6bYTOrPejhCRsXT6McptdcU8XwLHX58F9r79eZ6RUgC13lY9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--3a96b51d--dfb4--5025--9237--a7295cf21147-osd--block--3a96b51d--dfb4--5025--9237--a7295cf21147', 'dm-uuid-LVM-rtlyMB2dCL1xBVCjxdFgUZQe34RzKl2yj8TH7muRKMAgiZ4tsixX4NUg0uUmJifx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e', 'scsi-SQEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578728 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578789 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578850 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fb51feb3--1d35--54b5--8ae7--a768510de0c7-osd--block--fb51feb3--1d35--54b5--8ae7--a768510de0c7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NteD8s-bHnx-ob7m-Nrex-AzPQ-gcuO-xNLEZR', 'scsi-0QEMU_QEMU_HARDDISK_de0fd659-da77-4142-b79c-576ab421e122', 'scsi-SQEMU_QEMU_HARDDISK_de0fd659-da77-4142-b79c-576ab421e122'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578895 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.578910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef-osd--block--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8blfqt-AMNX-egei-T1WZ-gYXs-o9ZA-RuO3Jz', 'scsi-0QEMU_QEMU_HARDDISK_76bcb63e-af93-4dd0-8322-d7e595f79a86', 'scsi-SQEMU_QEMU_HARDDISK_76bcb63e-af93-4dd0-8322-d7e595f79a86'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a98e6dc-3aee-4304-8fec-4f13a362fcb3', 'scsi-SQEMU_QEMU_HARDDISK_5a98e6dc-3aee-4304-8fec-4f13a362fcb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.578961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a4d57ac9--6a13--5c0f--8510--006bddd85eb9-osd--block--a4d57ac9--6a13--5c0f--8510--006bddd85eb9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hYwwY6-iOtQ-zGma-oaom-t0s8-TBDp-jDC4Fj', 'scsi-0QEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab', 'scsi-SQEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.578999 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3a96b51d--dfb4--5025--9237--a7295cf21147-osd--block--3a96b51d--dfb4--5025--9237--a7295cf21147'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Nxnuu3-Xqm4-ppZR-n7Hk-J8xe-yIYD-Rl9W16', 'scsi-0QEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c', 'scsi-SQEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.579014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.579030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.579053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f', 'scsi-SQEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.579079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.579102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.579118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-01 01:01:46.579135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.579150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.579165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.579181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.579212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30', 'scsi-SQEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part1', 'scsi-SQEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part14', 'scsi-SQEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part15', 'scsi-SQEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part16', 'scsi-SQEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.579242 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:01:46.579258 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.579274 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.579291 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.579306 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.579319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.579329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.579341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:01:46.579352 | orchestrator | skipping: [testbed-node-2] => (items loop3-loop7, sda, sr0 -- zero-size loop devices, the 80.00 GB QEMU root disk (partitions sda1/sda14/sda15/sda16), and the config-2 QEMU DVD-ROM; repetitive per-item device facts elided)  2026-01-01 01:01:46.579504 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.579513 | orchestrator |
2026-01-01 01:01:46.579522 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-01 01:01:46.579532 | orchestrator | Thursday 01 January 2026 00:50:10 +0000 (0:00:01.875) 0:00:43.249 ******
2026-01-01 01:01:46.579542 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0-loop7, sdb, sdc, sdd, sr0 -- condition 'osd_auto_discovery | default(False) | bool' was false; repetitive per-item device facts elided. Each storage node reports the same layout: dm-0/dm-1 are 20.00 GB ceph OSD block LVs, sda is the 80.00 GB QEMU root disk, sdb/sdc are 20.00 GB LVM PVs backing the OSDs, sdd is an unused 20.00 GB disk, and sr0 is the config-2 QEMU DVD-ROM.)
2026-01-01 01:01:46.579567 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0 -- condition 'osd_auto_discovery | default(False) | bool' was false; repetitive per-item device facts elided)
2026-01-01 01:01:46.579963 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7 -- condition 'osd_auto_discovery | default(False) | bool' was false; repetitive per-item device facts elided)
2026-01-01 01:01:46.580036 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.580081 | orchestrator | skipping: [testbed-node-0] => (items loop0, loop1, loop2 -- condition 'inventory_hostname in groups.get(osd_group_name, [])' was false; repetitive per-item device facts elided)
2026-01-01 01:01:46.580156 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.580180 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-01 01:01:46.580207 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580232 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a4d57ac9--6a13--5c0f--8510--006bddd85eb9-osd--block--a4d57ac9--6a13--5c0f--8510--006bddd85eb9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hYwwY6-iOtQ-zGma-oaom-t0s8-TBDp-jDC4Fj', 'scsi-0QEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab', 'scsi-SQEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580250 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3a96b51d--dfb4--5025--9237--a7295cf21147-osd--block--3a96b51d--dfb4--5025--9237--a7295cf21147'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Nxnuu3-Xqm4-ppZR-n7Hk-J8xe-yIYD-Rl9W16', 'scsi-0QEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c', 'scsi-SQEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580273 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f', 'scsi-SQEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580302 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580317 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580327 | orchestrator | 
skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580346 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580362 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580379 | orchestrator | skipping: [testbed-node-1] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580394 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580420 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580471 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580498 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580509 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580589 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580613 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580642 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30', 'scsi-SQEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part1', 'scsi-SQEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part14', 'scsi-SQEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part15', 'scsi-SQEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part16', 'scsi-SQEMU_QEMU_HARDDISK_47ffa559-76d7-406a-af3c-e43e80efec30-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-01 01:01:46.580659 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580683 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e', 'scsi-SQEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b1096fbb-20c9-48ea-8453-55839ef2db2e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-01 01:01:46.580709 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580735 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.580751 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.580766 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.580781 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580796 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580818 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580843 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580860 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580876 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580901 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580918 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580942 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27dcfef2-d19c-4c37-9c2e-324f30ce5c65', 'scsi-SQEMU_QEMU_HARDDISK_27dcfef2-d19c-4c37-9c2e-324f30ce5c65'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27dcfef2-d19c-4c37-9c2e-324f30ce5c65-part1', 'scsi-SQEMU_QEMU_HARDDISK_27dcfef2-d19c-4c37-9c2e-324f30ce5c65-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27dcfef2-d19c-4c37-9c2e-324f30ce5c65-part14', 'scsi-SQEMU_QEMU_HARDDISK_27dcfef2-d19c-4c37-9c2e-324f30ce5c65-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_27dcfef2-d19c-4c37-9c2e-324f30ce5c65-part15', 'scsi-SQEMU_QEMU_HARDDISK_27dcfef2-d19c-4c37-9c2e-324f30ce5c65-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_27dcfef2-d19c-4c37-9c2e-324f30ce5c65-part16', 'scsi-SQEMU_QEMU_HARDDISK_27dcfef2-d19c-4c37-9c2e-324f30ce5c65-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580968 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:01:46.580985 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.581001 | orchestrator | 2026-01-01 01:01:46.581024 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-01 01:01:46.581041 | orchestrator | Thursday 01 January 2026 00:50:13 +0000 (0:00:02.772) 0:00:46.022 ****** 2026-01-01 01:01:46.581058 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.581074 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.581083 | orchestrator | ok: [testbed-node-5] 2026-01-01 
01:01:46.581092 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.581101 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.581109 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.581118 | orchestrator | 2026-01-01 01:01:46.581127 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-01 01:01:46.581136 | orchestrator | Thursday 01 January 2026 00:50:15 +0000 (0:00:02.589) 0:00:48.611 ****** 2026-01-01 01:01:46.581145 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.581153 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.581163 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.581172 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.581180 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.581196 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.581204 | orchestrator | 2026-01-01 01:01:46.581213 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-01 01:01:46.581222 | orchestrator | Thursday 01 January 2026 00:50:17 +0000 (0:00:01.603) 0:00:50.214 ****** 2026-01-01 01:01:46.581230 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.581239 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.581248 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.581256 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.581265 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.581274 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.581284 | orchestrator | 2026-01-01 01:01:46.581299 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-01 01:01:46.581314 | orchestrator | Thursday 01 January 2026 00:50:19 +0000 (0:00:01.475) 0:00:51.690 ****** 2026-01-01 01:01:46.581328 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.581343 | orchestrator | skipping: [testbed-node-4] 
2026-01-01 01:01:46.581359 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.581375 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.581391 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.581405 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.581420 | orchestrator | 2026-01-01 01:01:46.581500 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-01 01:01:46.581514 | orchestrator | Thursday 01 January 2026 00:50:20 +0000 (0:00:01.381) 0:00:53.071 ****** 2026-01-01 01:01:46.581524 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.581533 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.581541 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.581550 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.581559 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.581567 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.581576 | orchestrator | 2026-01-01 01:01:46.581585 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-01 01:01:46.581594 | orchestrator | Thursday 01 January 2026 00:50:23 +0000 (0:00:02.726) 0:00:55.798 ****** 2026-01-01 01:01:46.581603 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.581612 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.581620 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.581629 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.581638 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.581647 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.581655 | orchestrator | 2026-01-01 01:01:46.581664 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-01 01:01:46.581673 | orchestrator | Thursday 01 January 2026 00:50:24 +0000 (0:00:01.801) 0:00:57.599 ****** 
2026-01-01 01:01:46.581681 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-01 01:01:46.581690 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-01 01:01:46.581700 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-01-01 01:01:46.581708 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-01 01:01:46.581717 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-01 01:01:46.581726 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-01-01 01:01:46.581735 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-01-01 01:01:46.581743 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-01-01 01:01:46.581752 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-01 01:01:46.581760 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-01 01:01:46.581769 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-01 01:01:46.581777 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-01 01:01:46.581786 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-01 01:01:46.581795 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-01 01:01:46.581813 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-01 01:01:46.581822 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-01 01:01:46.581831 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-01 01:01:46.581839 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-01 01:01:46.581848 | orchestrator | 2026-01-01 01:01:46.581857 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-01 01:01:46.581866 | orchestrator | Thursday 01 January 2026 00:50:28 +0000 (0:00:03.911) 0:01:01.511 ****** 2026-01-01 01:01:46.581874 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-01-01 01:01:46.581883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-01 01:01:46.581892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-01 01:01:46.581901 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.581910 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-01 01:01:46.581918 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-01 01:01:46.581927 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-01 01:01:46.581936 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.581946 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-01 01:01:46.581979 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-01 01:01:46.581990 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-01 01:01:46.581999 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.582007 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-01 01:01:46.582065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-01 01:01:46.582076 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-01 01:01:46.582085 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.582093 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-01 01:01:46.582102 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-01 01:01:46.582111 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-01 01:01:46.582120 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-01 01:01:46.582129 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-01 01:01:46.582137 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.582146 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-2)  2026-01-01 01:01:46.582155 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.582164 | orchestrator | 2026-01-01 01:01:46.582172 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-01 01:01:46.582181 | orchestrator | Thursday 01 January 2026 00:50:30 +0000 (0:00:01.171) 0:01:02.683 ****** 2026-01-01 01:01:46.582190 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.582199 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.582207 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.582217 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.582226 | orchestrator | 2026-01-01 01:01:46.582235 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-01 01:01:46.582244 | orchestrator | Thursday 01 January 2026 00:50:31 +0000 (0:00:01.517) 0:01:04.200 ****** 2026-01-01 01:01:46.582258 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.582267 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.582276 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.582285 | orchestrator | 2026-01-01 01:01:46.582294 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-01 01:01:46.582302 | orchestrator | Thursday 01 January 2026 00:50:31 +0000 (0:00:00.430) 0:01:04.630 ****** 2026-01-01 01:01:46.582311 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.582328 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.582337 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.582346 | orchestrator | 2026-01-01 01:01:46.582355 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2026-01-01 01:01:46.582364 | orchestrator | Thursday 01 January 2026 00:50:32 +0000 (0:00:00.425) 0:01:05.056 ****** 2026-01-01 01:01:46.582372 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.582381 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.582390 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.582399 | orchestrator | 2026-01-01 01:01:46.582408 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-01 01:01:46.582417 | orchestrator | Thursday 01 January 2026 00:50:33 +0000 (0:00:00.612) 0:01:05.668 ****** 2026-01-01 01:01:46.582427 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.582460 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.582480 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.582495 | orchestrator | 2026-01-01 01:01:46.582505 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-01 01:01:46.582514 | orchestrator | Thursday 01 January 2026 00:50:33 +0000 (0:00:00.808) 0:01:06.477 ****** 2026-01-01 01:01:46.582523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:01:46.582531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:01:46.582540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:01:46.582549 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.582557 | orchestrator | 2026-01-01 01:01:46.582566 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-01 01:01:46.582575 | orchestrator | Thursday 01 January 2026 00:50:34 +0000 (0:00:00.412) 0:01:06.889 ****** 2026-01-01 01:01:46.582584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:01:46.582593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:01:46.582602 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:01:46.582611 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.582620 | orchestrator | 2026-01-01 01:01:46.582629 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-01 01:01:46.582638 | orchestrator | Thursday 01 January 2026 00:50:34 +0000 (0:00:00.427) 0:01:07.316 ****** 2026-01-01 01:01:46.582647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:01:46.582655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:01:46.582664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:01:46.582673 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.582682 | orchestrator | 2026-01-01 01:01:46.582691 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-01 01:01:46.582699 | orchestrator | Thursday 01 January 2026 00:50:35 +0000 (0:00:00.419) 0:01:07.735 ****** 2026-01-01 01:01:46.582708 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.582717 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.582725 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.582734 | orchestrator | 2026-01-01 01:01:46.582743 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-01 01:01:46.582751 | orchestrator | Thursday 01 January 2026 00:50:35 +0000 (0:00:00.658) 0:01:08.394 ****** 2026-01-01 01:01:46.582760 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-01 01:01:46.582769 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-01 01:01:46.582784 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-01 01:01:46.582793 | orchestrator | 2026-01-01 01:01:46.582802 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-01 01:01:46.582811 | orchestrator | Thursday 01 January 2026 
00:50:37 +0000 (0:00:01.515) 0:01:09.909 ****** 2026-01-01 01:01:46.582820 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 01:01:46.582840 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 01:01:46.582849 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 01:01:46.582858 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-01 01:01:46.582867 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-01 01:01:46.582876 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-01 01:01:46.582884 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-01 01:01:46.582893 | orchestrator | 2026-01-01 01:01:46.582902 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-01 01:01:46.582910 | orchestrator | Thursday 01 January 2026 00:50:38 +0000 (0:00:00.871) 0:01:10.780 ****** 2026-01-01 01:01:46.582919 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 01:01:46.582927 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 01:01:46.582936 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 01:01:46.582945 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-01 01:01:46.582953 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-01 01:01:46.582962 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-01 01:01:46.582976 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2026-01-01 01:01:46.582985 | orchestrator | 2026-01-01 01:01:46.582994 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-01 01:01:46.583003 | orchestrator | Thursday 01 January 2026 00:50:40 +0000 (0:00:02.187) 0:01:12.968 ****** 2026-01-01 01:01:46.583012 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.583022 | orchestrator | 2026-01-01 01:01:46.583031 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-01 01:01:46.583039 | orchestrator | Thursday 01 January 2026 00:50:41 +0000 (0:00:01.318) 0:01:14.286 ****** 2026-01-01 01:01:46.583048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.583063 | orchestrator | 2026-01-01 01:01:46.583078 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-01 01:01:46.583092 | orchestrator | Thursday 01 January 2026 00:50:43 +0000 (0:00:01.492) 0:01:15.779 ****** 2026-01-01 01:01:46.583106 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.583121 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.583135 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.583150 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.583166 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.583182 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.583195 | orchestrator | 2026-01-01 01:01:46.583204 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-01 01:01:46.583212 | orchestrator | Thursday 01 January 2026 00:50:44 +0000 (0:00:01.558) 0:01:17.338 ****** 2026-01-01 
01:01:46.583222 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.583230 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.583239 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.583248 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.583257 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.583266 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.583275 | orchestrator | 2026-01-01 01:01:46.583283 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-01 01:01:46.583292 | orchestrator | Thursday 01 January 2026 00:50:45 +0000 (0:00:01.003) 0:01:18.341 ****** 2026-01-01 01:01:46.583309 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.583319 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.583327 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.583337 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.583345 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.583354 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.583363 | orchestrator | 2026-01-01 01:01:46.583372 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-01 01:01:46.583381 | orchestrator | Thursday 01 January 2026 00:50:46 +0000 (0:00:00.894) 0:01:19.235 ****** 2026-01-01 01:01:46.583390 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.583399 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.583408 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.583416 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.583425 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.583483 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.583495 | orchestrator | 2026-01-01 01:01:46.583504 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-01 01:01:46.583513 | orchestrator | 
Thursday 01 January 2026 00:50:47 +0000 (0:00:00.923) 0:01:20.159 ****** 2026-01-01 01:01:46.583523 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.583532 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.583548 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.583562 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.583577 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.583610 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.583628 | orchestrator | 2026-01-01 01:01:46.583644 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-01 01:01:46.583656 | orchestrator | Thursday 01 January 2026 00:50:48 +0000 (0:00:01.467) 0:01:21.627 ****** 2026-01-01 01:01:46.583664 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.583674 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.583682 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.583691 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.583700 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.583709 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.583718 | orchestrator | 2026-01-01 01:01:46.583727 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-01 01:01:46.583735 | orchestrator | Thursday 01 January 2026 00:50:49 +0000 (0:00:00.876) 0:01:22.504 ****** 2026-01-01 01:01:46.583744 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.583758 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.583772 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.583787 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.583801 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.583817 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.583833 | orchestrator | 2026-01-01 01:01:46.583848 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-01-01 01:01:46.583863 | orchestrator | Thursday 01 January 2026 00:50:51 +0000 (0:00:01.241) 0:01:23.745 ****** 2026-01-01 01:01:46.583876 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.583889 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.583904 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.583919 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.583933 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.583942 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.583950 | orchestrator | 2026-01-01 01:01:46.583958 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-01 01:01:46.583967 | orchestrator | Thursday 01 January 2026 00:50:52 +0000 (0:00:01.586) 0:01:25.331 ****** 2026-01-01 01:01:46.583976 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.583984 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.583992 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.584009 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.584027 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.584041 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.584055 | orchestrator | 2026-01-01 01:01:46.584068 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-01 01:01:46.584082 | orchestrator | Thursday 01 January 2026 00:50:54 +0000 (0:00:01.686) 0:01:27.018 ****** 2026-01-01 01:01:46.584094 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.584107 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.584121 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.584136 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.584151 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.584165 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.584179 | 
orchestrator | 2026-01-01 01:01:46.584192 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-01 01:01:46.584206 | orchestrator | Thursday 01 January 2026 00:50:55 +0000 (0:00:01.390) 0:01:28.408 ****** 2026-01-01 01:01:46.584220 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.584234 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.584250 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.584259 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.584267 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.584279 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.584293 | orchestrator | 2026-01-01 01:01:46.584307 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-01 01:01:46.584321 | orchestrator | Thursday 01 January 2026 00:50:57 +0000 (0:00:01.729) 0:01:30.137 ****** 2026-01-01 01:01:46.584334 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.584347 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.584361 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.584375 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.584390 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.584403 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.584417 | orchestrator | 2026-01-01 01:01:46.584432 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-01 01:01:46.584467 | orchestrator | Thursday 01 January 2026 00:50:58 +0000 (0:00:01.474) 0:01:31.612 ****** 2026-01-01 01:01:46.584476 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.584485 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.584499 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.584512 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.584525 | orchestrator | skipping: [testbed-node-1] 2026-01-01 
01:01:46.584539 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.584553 | orchestrator | 2026-01-01 01:01:46.584566 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-01 01:01:46.584580 | orchestrator | Thursday 01 January 2026 00:51:01 +0000 (0:00:02.518) 0:01:34.131 ****** 2026-01-01 01:01:46.584593 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.584607 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.584620 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.584633 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.584645 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.584657 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.584670 | orchestrator | 2026-01-01 01:01:46.584683 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-01 01:01:46.584695 | orchestrator | Thursday 01 January 2026 00:51:03 +0000 (0:00:01.666) 0:01:35.798 ****** 2026-01-01 01:01:46.584708 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.584721 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.584735 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.584749 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.584762 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.584775 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.584789 | orchestrator | 2026-01-01 01:01:46.584802 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-01 01:01:46.584828 | orchestrator | Thursday 01 January 2026 00:51:04 +0000 (0:00:01.068) 0:01:36.867 ****** 2026-01-01 01:01:46.584844 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.584857 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.584871 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.584884 | 
orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.584909 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.584924 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.584938 | orchestrator | 2026-01-01 01:01:46.584952 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-01 01:01:46.584965 | orchestrator | Thursday 01 January 2026 00:51:05 +0000 (0:00:01.237) 0:01:38.105 ****** 2026-01-01 01:01:46.584979 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.584992 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.585005 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.585019 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.585033 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.585046 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.585060 | orchestrator | 2026-01-01 01:01:46.585073 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-01 01:01:46.585087 | orchestrator | Thursday 01 January 2026 00:51:06 +0000 (0:00:01.173) 0:01:39.278 ****** 2026-01-01 01:01:46.585100 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.585114 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.585128 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.585142 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.585156 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.585170 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.585184 | orchestrator | 2026-01-01 01:01:46.585198 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-01 01:01:46.585211 | orchestrator | Thursday 01 January 2026 00:51:07 +0000 (0:00:01.142) 0:01:40.420 ****** 2026-01-01 01:01:46.585225 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.585239 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.585252 | 
orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.585265 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.585279 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.585294 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.585307 | orchestrator | 2026-01-01 01:01:46.585322 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-01 01:01:46.585337 | orchestrator | Thursday 01 January 2026 00:51:09 +0000 (0:00:01.750) 0:01:42.171 ****** 2026-01-01 01:01:46.585351 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.585364 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.585379 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.585392 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.585413 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.585428 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.585463 | orchestrator | 2026-01-01 01:01:46.585478 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-01 01:01:46.585491 | orchestrator | Thursday 01 January 2026 00:51:11 +0000 (0:00:01.792) 0:01:43.963 ****** 2026-01-01 01:01:46.585505 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.585519 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.585533 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.585546 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.585560 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.585573 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.585587 | orchestrator | 2026-01-01 01:01:46.585600 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-01 01:01:46.585614 | orchestrator | Thursday 01 January 2026 00:51:13 +0000 (0:00:02.662) 0:01:46.625 ****** 2026-01-01 01:01:46.585628 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.585656 | orchestrator | 2026-01-01 01:01:46.585669 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-01 01:01:46.585683 | orchestrator | Thursday 01 January 2026 00:51:15 +0000 (0:00:01.966) 0:01:48.592 ****** 2026-01-01 01:01:46.585696 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.585709 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.585723 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.585736 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.585748 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.585760 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.585771 | orchestrator | 2026-01-01 01:01:46.585783 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-01 01:01:46.585796 | orchestrator | Thursday 01 January 2026 00:51:16 +0000 (0:00:00.681) 0:01:49.273 ****** 2026-01-01 01:01:46.585809 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.585821 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.585836 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.585850 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.585863 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.585877 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.585891 | orchestrator | 2026-01-01 01:01:46.585903 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-01 01:01:46.585916 | orchestrator | Thursday 01 January 2026 00:51:17 +0000 (0:00:00.969) 0:01:50.243 ****** 2026-01-01 01:01:46.585930 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 
01:01:46.585945 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 01:01:46.585958 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 01:01:46.585971 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 01:01:46.585985 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 01:01:46.585998 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 01:01:46.586011 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 01:01:46.586075 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 01:01:46.586090 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 01:01:46.586115 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 01:01:46.586143 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 01:01:46.586156 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 01:01:46.586170 | orchestrator | 2026-01-01 01:01:46.586183 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-01 01:01:46.586197 | orchestrator | Thursday 01 January 2026 00:51:19 +0000 (0:00:02.014) 0:01:52.257 ****** 2026-01-01 01:01:46.586211 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.586224 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.586239 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.586253 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.586268 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.586282 | 
orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.586295 | orchestrator | 2026-01-01 01:01:46.586308 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-01 01:01:46.586321 | orchestrator | Thursday 01 January 2026 00:51:20 +0000 (0:00:01.349) 0:01:53.607 ****** 2026-01-01 01:01:46.586334 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.586346 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.586373 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.586386 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.586400 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.586412 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.586425 | orchestrator | 2026-01-01 01:01:46.586504 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-01 01:01:46.586522 | orchestrator | Thursday 01 January 2026 00:51:21 +0000 (0:00:00.705) 0:01:54.312 ****** 2026-01-01 01:01:46.586536 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.586549 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.586565 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.586580 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.586594 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.586607 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.586620 | orchestrator | 2026-01-01 01:01:46.586632 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-01 01:01:46.586655 | orchestrator | Thursday 01 January 2026 00:51:22 +0000 (0:00:00.847) 0:01:55.160 ****** 2026-01-01 01:01:46.586669 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.586682 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.586695 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.586709 | 
orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.586721 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.586733 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.586746 | orchestrator | 2026-01-01 01:01:46.586759 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-01 01:01:46.586772 | orchestrator | Thursday 01 January 2026 00:51:23 +0000 (0:00:00.634) 0:01:55.794 ****** 2026-01-01 01:01:46.586785 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.586800 | orchestrator | 2026-01-01 01:01:46.586813 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-01 01:01:46.586826 | orchestrator | Thursday 01 January 2026 00:51:24 +0000 (0:00:01.262) 0:01:57.056 ****** 2026-01-01 01:01:46.586841 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.586855 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.586867 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.586881 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.586894 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.586906 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.586916 | orchestrator | 2026-01-01 01:01:46.586927 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-01 01:01:46.586939 | orchestrator | Thursday 01 January 2026 00:52:24 +0000 (0:00:59.791) 0:02:56.848 ****** 2026-01-01 01:01:46.586950 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 01:01:46.586961 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 01:01:46.586972 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-01-01 01:01:46.586983 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.586995 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 01:01:46.587006 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 01:01:46.587019 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 01:01:46.587029 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.587038 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 01:01:46.587048 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 01:01:46.587058 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 01:01:46.587069 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.587094 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 01:01:46.587105 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 01:01:46.587116 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 01:01:46.587127 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.587138 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 01:01:46.587149 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 01:01:46.587160 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 01:01:46.587172 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.587208 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 01:01:46.587221 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 01:01:46.587232 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 01:01:46.587244 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.587255 | orchestrator | 2026-01-01 01:01:46.587265 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-01 01:01:46.587276 | orchestrator | Thursday 01 January 2026 00:52:24 +0000 (0:00:00.789) 0:02:57.638 ****** 2026-01-01 01:01:46.587289 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.587300 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.587310 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.587320 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.587331 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.587343 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.587355 | orchestrator | 2026-01-01 01:01:46.587366 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-01 01:01:46.587376 | orchestrator | Thursday 01 January 2026 00:52:26 +0000 (0:00:01.113) 0:02:58.751 ****** 2026-01-01 01:01:46.587387 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.587397 | orchestrator | 2026-01-01 01:01:46.587408 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-01 01:01:46.587419 | orchestrator | Thursday 01 January 2026 00:52:26 +0000 (0:00:00.150) 0:02:58.902 ****** 2026-01-01 01:01:46.587431 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.587466 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.587478 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.587490 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.587501 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.587512 | orchestrator | skipping: 
[testbed-node-2] 2026-01-01 01:01:46.587523 | orchestrator | 2026-01-01 01:01:46.587534 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-01 01:01:46.587546 | orchestrator | Thursday 01 January 2026 00:52:27 +0000 (0:00:00.759) 0:02:59.661 ****** 2026-01-01 01:01:46.587557 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.587569 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.587589 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.587600 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.587611 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.587622 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.587633 | orchestrator | 2026-01-01 01:01:46.587645 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-01 01:01:46.587656 | orchestrator | Thursday 01 January 2026 00:52:27 +0000 (0:00:00.854) 0:03:00.516 ****** 2026-01-01 01:01:46.587668 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.587679 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.587690 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.587701 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.587712 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.587724 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.587746 | orchestrator | 2026-01-01 01:01:46.587758 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-01 01:01:46.587769 | orchestrator | Thursday 01 January 2026 00:52:28 +0000 (0:00:00.620) 0:03:01.137 ****** 2026-01-01 01:01:46.587779 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.587790 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.587801 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.587813 | orchestrator | ok: [testbed-node-0] 2026-01-01 
01:01:46.587824 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.587836 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.587848 | orchestrator | 2026-01-01 01:01:46.587861 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-01 01:01:46.587872 | orchestrator | Thursday 01 January 2026 00:52:31 +0000 (0:00:02.861) 0:03:03.998 ****** 2026-01-01 01:01:46.587885 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.587897 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.587907 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.587918 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.587929 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.587940 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.587951 | orchestrator | 2026-01-01 01:01:46.587962 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-01 01:01:46.587972 | orchestrator | Thursday 01 January 2026 00:52:31 +0000 (0:00:00.630) 0:03:04.629 ****** 2026-01-01 01:01:46.587984 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.587997 | orchestrator | 2026-01-01 01:01:46.588008 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-01 01:01:46.588019 | orchestrator | Thursday 01 January 2026 00:52:33 +0000 (0:00:01.238) 0:03:05.868 ****** 2026-01-01 01:01:46.588029 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.588041 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.588052 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.588065 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.588075 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.588086 | orchestrator | skipping: 
[testbed-node-2] 2026-01-01 01:01:46.588096 | orchestrator | 2026-01-01 01:01:46.588107 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-01 01:01:46.588118 | orchestrator | Thursday 01 January 2026 00:52:34 +0000 (0:00:00.896) 0:03:06.765 ****** 2026-01-01 01:01:46.588129 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.588140 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.588151 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.588162 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.588173 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.588184 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.588196 | orchestrator | 2026-01-01 01:01:46.588207 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-01 01:01:46.588218 | orchestrator | Thursday 01 January 2026 00:52:34 +0000 (0:00:00.881) 0:03:07.647 ****** 2026-01-01 01:01:46.588230 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.588241 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.588268 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.588281 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.588291 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.588302 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.588312 | orchestrator | 2026-01-01 01:01:46.588323 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-01 01:01:46.588334 | orchestrator | Thursday 01 January 2026 00:52:36 +0000 (0:00:01.090) 0:03:08.737 ****** 2026-01-01 01:01:46.588344 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.588355 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.588365 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.588392 | orchestrator | skipping: 
[testbed-node-0] 2026-01-01 01:01:46.588404 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.588414 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.588425 | orchestrator | 2026-01-01 01:01:46.588499 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-01 01:01:46.588511 | orchestrator | Thursday 01 January 2026 00:52:37 +0000 (0:00:01.040) 0:03:09.778 ****** 2026-01-01 01:01:46.588523 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.588533 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.588544 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.588555 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.588566 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.588575 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.588585 | orchestrator | 2026-01-01 01:01:46.588595 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-01 01:01:46.588606 | orchestrator | Thursday 01 January 2026 00:52:38 +0000 (0:00:01.268) 0:03:11.046 ****** 2026-01-01 01:01:46.588617 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.588627 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.588637 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.588646 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.588655 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.588665 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.588676 | orchestrator | 2026-01-01 01:01:46.588685 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-01 01:01:46.588696 | orchestrator | Thursday 01 January 2026 00:52:39 +0000 (0:00:00.917) 0:03:11.964 ****** 2026-01-01 01:01:46.588717 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.588727 | orchestrator | skipping: 
[testbed-node-4] 2026-01-01 01:01:46.588737 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.588748 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.588758 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.588767 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.588777 | orchestrator | 2026-01-01 01:01:46.588788 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-01 01:01:46.588798 | orchestrator | Thursday 01 January 2026 00:52:40 +0000 (0:00:01.156) 0:03:13.120 ****** 2026-01-01 01:01:46.588809 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.588819 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.588829 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.588838 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.588847 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.588855 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.588864 | orchestrator | 2026-01-01 01:01:46.588874 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-01 01:01:46.588884 | orchestrator | Thursday 01 January 2026 00:52:41 +0000 (0:00:00.907) 0:03:14.028 ****** 2026-01-01 01:01:46.588894 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.588905 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.588915 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.588925 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.588936 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.588947 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.588956 | orchestrator | 2026-01-01 01:01:46.588966 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-01 01:01:46.588976 | orchestrator | Thursday 01 January 2026 00:52:43 +0000 (0:00:01.972) 0:03:16.001 ****** 2026-01-01 
01:01:46.588988 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.589000 | orchestrator | 2026-01-01 01:01:46.589009 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-01 01:01:46.589018 | orchestrator | Thursday 01 January 2026 00:52:44 +0000 (0:00:01.586) 0:03:17.587 ****** 2026-01-01 01:01:46.589045 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-01-01 01:01:46.589055 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-01-01 01:01:46.589065 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-01-01 01:01:46.589075 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-01-01 01:01:46.589085 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-01-01 01:01:46.589094 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-01-01 01:01:46.589104 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-01-01 01:01:46.589115 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-01-01 01:01:46.589125 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-01-01 01:01:46.589134 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-01-01 01:01:46.589144 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-01-01 01:01:46.589155 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-01-01 01:01:46.589165 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-01-01 01:01:46.589176 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-01-01 01:01:46.589186 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-01-01 01:01:46.589195 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 
2026-01-01 01:01:46.589206 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-01-01 01:01:46.589216 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-01-01 01:01:46.589238 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-01-01 01:01:46.589249 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-01-01 01:01:46.589259 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-01-01 01:01:46.589269 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-01-01 01:01:46.589278 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-01-01 01:01:46.589288 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-01-01 01:01:46.589298 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-01-01 01:01:46.589308 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-01-01 01:01:46.589317 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-01-01 01:01:46.589327 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-01-01 01:01:46.589337 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-01-01 01:01:46.589348 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-01-01 01:01:46.589357 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-01-01 01:01:46.589366 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-01-01 01:01:46.589376 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-01-01 01:01:46.589386 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-01-01 01:01:46.589395 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-01-01 01:01:46.589406 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-01-01 01:01:46.589416 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-01-01 01:01:46.589426 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-01-01 01:01:46.589457 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-01-01 01:01:46.589467 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-01-01 01:01:46.589477 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-01-01 01:01:46.589496 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-01-01 01:01:46.589507 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-01-01 01:01:46.589518 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-01-01 01:01:46.589538 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-01-01 01:01:46.589547 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-01-01 01:01:46.589557 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-01 01:01:46.589567 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-01-01 01:01:46.589577 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-01 01:01:46.589587 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-01 01:01:46.589597 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-01 01:01:46.589607 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-01 01:01:46.589616 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-01 01:01:46.589626 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-01 01:01:46.589637 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-01 01:01:46.589647 | orchestrator | changed: 
[testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-01 01:01:46.589658 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-01-01 01:01:46.589668 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-01 01:01:46.589678 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-01 01:01:46.589688 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-01 01:01:46.589699 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-01 01:01:46.589708 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-01 01:01:46.589718 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-01 01:01:46.589728 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-01 01:01:46.589737 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-01 01:01:46.589747 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-01 01:01:46.589757 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-01 01:01:46.589769 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-01 01:01:46.589776 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-01 01:01:46.589783 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-01 01:01:46.589789 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-01 01:01:46.589795 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-01 01:01:46.589801 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-01 01:01:46.589807 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 
2026-01-01 01:01:46.589814 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-01 01:01:46.589821 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-01 01:01:46.589838 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-01 01:01:46.589845 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-01 01:01:46.589851 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-01 01:01:46.589858 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-01 01:01:46.589864 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-01-01 01:01:46.589870 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-01 01:01:46.589877 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-01 01:01:46.589892 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-01-01 01:01:46.589899 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-01-01 01:01:46.589905 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-01-01 01:01:46.589911 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-01-01 01:01:46.589917 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-01 01:01:46.589924 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-01-01 01:01:46.589930 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-01-01 01:01:46.589936 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-01-01 01:01:46.589942 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-01-01 01:01:46.589949 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-01 01:01:46.589955 | 
orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-01-01 01:01:46.589961 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-01-01 01:01:46.589968 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-01-01 01:01:46.589974 | orchestrator | 2026-01-01 01:01:46.589980 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-01 01:01:46.589987 | orchestrator | Thursday 01 January 2026 00:52:52 +0000 (0:00:08.013) 0:03:25.601 ****** 2026-01-01 01:01:46.589999 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590006 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590094 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590106 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-01-01 01:01:46.590114 | orchestrator | 2026-01-01 01:01:46.590120 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-01 01:01:46.590126 | orchestrator | Thursday 01 January 2026 00:52:54 +0000 (0:00:01.057) 0:03:26.658 ****** 2026-01-01 01:01:46.590132 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.590139 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.590145 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.590152 | orchestrator | 2026-01-01 01:01:46.590158 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-01 01:01:46.590164 | orchestrator | Thursday 01 January 2026 00:52:55 +0000 (0:00:01.171) 
0:03:27.830 ****** 2026-01-01 01:01:46.590170 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.590177 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.590183 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.590190 | orchestrator | 2026-01-01 01:01:46.590196 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-01 01:01:46.590202 | orchestrator | Thursday 01 January 2026 00:52:56 +0000 (0:00:01.542) 0:03:29.372 ****** 2026-01-01 01:01:46.590208 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.590215 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.590221 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.590227 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590233 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590239 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590246 | orchestrator | 2026-01-01 01:01:46.590252 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-01 01:01:46.590264 | orchestrator | Thursday 01 January 2026 00:52:57 +0000 (0:00:00.569) 0:03:29.942 ****** 2026-01-01 01:01:46.590270 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.590276 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.590283 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.590289 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590295 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590302 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590313 | orchestrator | 2026-01-01 01:01:46.590323 | 
orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-01 01:01:46.590334 | orchestrator | Thursday 01 January 2026 00:52:58 +0000 (0:00:00.833) 0:03:30.775 ****** 2026-01-01 01:01:46.590344 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.590356 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.590366 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.590377 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590389 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590399 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590412 | orchestrator | 2026-01-01 01:01:46.590476 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-01 01:01:46.590485 | orchestrator | Thursday 01 January 2026 00:52:58 +0000 (0:00:00.619) 0:03:31.395 ****** 2026-01-01 01:01:46.590492 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.590498 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.590504 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.590510 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590517 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590523 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590529 | orchestrator | 2026-01-01 01:01:46.590536 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-01 01:01:46.590542 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:00.737) 0:03:32.132 ****** 2026-01-01 01:01:46.590548 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.590555 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.590561 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590567 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590573 | orchestrator | skipping: [testbed-node-4] 
2026-01-01 01:01:46.590579 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590586 | orchestrator | 2026-01-01 01:01:46.590592 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-01 01:01:46.590598 | orchestrator | Thursday 01 January 2026 00:53:00 +0000 (0:00:00.541) 0:03:32.674 ****** 2026-01-01 01:01:46.590605 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.590611 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.590617 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.590623 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590629 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590636 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590642 | orchestrator | 2026-01-01 01:01:46.590648 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-01 01:01:46.590654 | orchestrator | Thursday 01 January 2026 00:53:00 +0000 (0:00:00.763) 0:03:33.437 ****** 2026-01-01 01:01:46.590661 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.590667 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.590674 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590680 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.590692 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590698 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590704 | orchestrator | 2026-01-01 01:01:46.590710 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-01 01:01:46.590717 | orchestrator | Thursday 01 January 2026 00:53:01 +0000 (0:00:01.020) 0:03:34.457 ****** 2026-01-01 01:01:46.590729 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.590735 | orchestrator | skipping: [testbed-node-4] 
2026-01-01 01:01:46.590742 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.590748 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590754 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590760 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590766 | orchestrator | 2026-01-01 01:01:46.590773 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-01 01:01:46.590779 | orchestrator | Thursday 01 January 2026 00:53:02 +0000 (0:00:00.934) 0:03:35.392 ****** 2026-01-01 01:01:46.590785 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590791 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590797 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590804 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.590810 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.590816 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.590822 | orchestrator | 2026-01-01 01:01:46.590829 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-01 01:01:46.590835 | orchestrator | Thursday 01 January 2026 00:53:05 +0000 (0:00:03.060) 0:03:38.453 ****** 2026-01-01 01:01:46.590841 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.590847 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.590854 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.590860 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590866 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590872 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590878 | orchestrator | 2026-01-01 01:01:46.590885 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-01 01:01:46.590891 | orchestrator | Thursday 01 January 2026 00:53:06 +0000 (0:00:01.145) 0:03:39.599 ****** 2026-01-01 
01:01:46.590897 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.590903 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.590910 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.590916 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590922 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590929 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590935 | orchestrator | 2026-01-01 01:01:46.590941 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-01 01:01:46.590947 | orchestrator | Thursday 01 January 2026 00:53:07 +0000 (0:00:00.739) 0:03:40.339 ****** 2026-01-01 01:01:46.590954 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.590960 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.590966 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.590972 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.590978 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.590984 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.590991 | orchestrator | 2026-01-01 01:01:46.590997 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-01 01:01:46.591003 | orchestrator | Thursday 01 January 2026 00:53:08 +0000 (0:00:00.924) 0:03:41.263 ****** 2026-01-01 01:01:46.591010 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.591016 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.591022 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.591029 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.591058 | 
orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.591070 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.591080 | orchestrator | 2026-01-01 01:01:46.591090 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-01 01:01:46.591107 | orchestrator | Thursday 01 January 2026 00:53:09 +0000 (0:00:01.044) 0:03:42.308 ****** 2026-01-01 01:01:46.591121 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-01-01 01:01:46.591134 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-01-01 01:01:46.591143 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.591149 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-01-01 01:01:46.591160 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-01-01 01:01:46.591166 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.591173 | orchestrator | skipping: 
[testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-01-01 01:01:46.591179 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-01-01 01:01:46.591186 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.591192 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.591198 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.591204 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.591211 | orchestrator | 2026-01-01 01:01:46.591217 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-01 01:01:46.591224 | orchestrator | Thursday 01 January 2026 00:53:10 +0000 (0:00:01.194) 0:03:43.502 ****** 2026-01-01 01:01:46.591230 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.591236 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.591242 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.591249 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.591255 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.591261 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.591267 | orchestrator | 2026-01-01 01:01:46.591273 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-01 01:01:46.591280 | orchestrator | Thursday 01 January 2026 00:53:11 +0000 (0:00:00.704) 0:03:44.206 ****** 2026-01-01 01:01:46.591286 | orchestrator | skipping: [testbed-node-3] 
2026-01-01 01:01:46.591292 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.591298 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.591305 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.591311 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.591317 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.591323 | orchestrator | 2026-01-01 01:01:46.591329 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-01 01:01:46.591336 | orchestrator | Thursday 01 January 2026 00:53:12 +0000 (0:00:00.852) 0:03:45.059 ****** 2026-01-01 01:01:46.591347 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.591353 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.591359 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.591365 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.591372 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.591378 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.591384 | orchestrator | 2026-01-01 01:01:46.591390 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-01 01:01:46.591396 | orchestrator | Thursday 01 January 2026 00:53:13 +0000 (0:00:00.739) 0:03:45.799 ****** 2026-01-01 01:01:46.591402 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.591409 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.591415 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.591421 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.591427 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.591478 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.591487 | orchestrator | 2026-01-01 01:01:46.591494 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] 
**** 2026-01-01 01:01:46.591525 | orchestrator | Thursday 01 January 2026 00:53:14 +0000 (0:00:01.017) 0:03:46.816 ****** 2026-01-01 01:01:46.591532 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.591539 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.591545 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.591551 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.591557 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.591563 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.591570 | orchestrator | 2026-01-01 01:01:46.591576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-01 01:01:46.591583 | orchestrator | Thursday 01 January 2026 00:53:14 +0000 (0:00:00.621) 0:03:47.438 ****** 2026-01-01 01:01:46.591589 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.591595 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.591602 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.591608 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.591614 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.591620 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.591626 | orchestrator | 2026-01-01 01:01:46.591633 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-01 01:01:46.591639 | orchestrator | Thursday 01 January 2026 00:53:15 +0000 (0:00:00.747) 0:03:48.186 ****** 2026-01-01 01:01:46.591645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:01:46.591652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:01:46.591658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:01:46.591664 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.591670 | orchestrator | 2026-01-01 01:01:46.591676 | orchestrator | TASK [ceph-facts : 
Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-01 01:01:46.591683 | orchestrator | Thursday 01 January 2026 00:53:15 +0000 (0:00:00.363) 0:03:48.549 ****** 2026-01-01 01:01:46.591689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:01:46.591696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:01:46.591706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:01:46.591718 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.591730 | orchestrator | 2026-01-01 01:01:46.591746 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-01 01:01:46.591756 | orchestrator | Thursday 01 January 2026 00:53:16 +0000 (0:00:00.417) 0:03:48.967 ****** 2026-01-01 01:01:46.591766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:01:46.591775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:01:46.591785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:01:46.591802 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.591811 | orchestrator | 2026-01-01 01:01:46.591819 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-01 01:01:46.591828 | orchestrator | Thursday 01 January 2026 00:53:16 +0000 (0:00:00.301) 0:03:49.268 ****** 2026-01-01 01:01:46.591837 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.591845 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.591854 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.591862 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.591871 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.591879 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.591887 | orchestrator | 2026-01-01 01:01:46.591897 | orchestrator | TASK [ceph-facts : Set_fact 
rgw_instances] ************************************* 2026-01-01 01:01:46.591906 | orchestrator | Thursday 01 January 2026 00:53:17 +0000 (0:00:00.568) 0:03:49.836 ****** 2026-01-01 01:01:46.591916 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-01 01:01:46.591926 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-01 01:01:46.591935 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-01 01:01:46.591945 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-01 01:01:46.591954 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.591963 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-01-01 01:01:46.591972 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.591981 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-01 01:01:46.591990 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.591999 | orchestrator | 2026-01-01 01:01:46.592008 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-01 01:01:46.592018 | orchestrator | Thursday 01 January 2026 00:53:19 +0000 (0:00:01.952) 0:03:51.789 ****** 2026-01-01 01:01:46.592027 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.592037 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.592047 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.592057 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.592066 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.592076 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.592086 | orchestrator | 2026-01-01 01:01:46.592096 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-01 01:01:46.592106 | orchestrator | Thursday 01 January 2026 00:53:21 +0000 (0:00:02.561) 0:03:54.351 ****** 2026-01-01 01:01:46.592116 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.592125 | orchestrator | changed: [testbed-node-5] 
2026-01-01 01:01:46.592135 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.592144 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.592154 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.592163 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.592172 | orchestrator | 2026-01-01 01:01:46.592182 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-01 01:01:46.592192 | orchestrator | Thursday 01 January 2026 00:53:22 +0000 (0:00:01.043) 0:03:55.394 ****** 2026-01-01 01:01:46.592201 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.592211 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.592222 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.592232 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.592243 | orchestrator | 2026-01-01 01:01:46.592253 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-01 01:01:46.592311 | orchestrator | Thursday 01 January 2026 00:53:23 +0000 (0:00:01.218) 0:03:56.612 ****** 2026-01-01 01:01:46.592323 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.592333 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.592343 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.592352 | orchestrator | 2026-01-01 01:01:46.592361 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-01 01:01:46.592379 | orchestrator | Thursday 01 January 2026 00:53:24 +0000 (0:00:00.421) 0:03:57.034 ****** 2026-01-01 01:01:46.592388 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.592396 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.592405 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.592413 | orchestrator | 2026-01-01 01:01:46.592421 | orchestrator | 
RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-01 01:01:46.592430 | orchestrator | Thursday 01 January 2026 00:53:25 +0000 (0:00:01.503) 0:03:58.537 ****** 2026-01-01 01:01:46.592459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-01 01:01:46.592468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-01 01:01:46.592476 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-01 01:01:46.592485 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.592493 | orchestrator | 2026-01-01 01:01:46.592501 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-01 01:01:46.592510 | orchestrator | Thursday 01 January 2026 00:53:27 +0000 (0:00:01.174) 0:03:59.711 ****** 2026-01-01 01:01:46.592518 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.592527 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.592535 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.592544 | orchestrator | 2026-01-01 01:01:46.592553 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-01 01:01:46.592562 | orchestrator | Thursday 01 January 2026 00:53:27 +0000 (0:00:00.342) 0:04:00.054 ****** 2026-01-01 01:01:46.592571 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.592581 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.592590 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.592599 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.592609 | orchestrator | 2026-01-01 01:01:46.592626 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-01 01:01:46.592636 | orchestrator | Thursday 01 January 2026 00:53:28 +0000 (0:00:01.016) 0:04:01.070 ****** 2026-01-01 
01:01:46.592645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:01:46.592654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:01:46.592664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:01:46.592674 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.592684 | orchestrator | 2026-01-01 01:01:46.592694 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-01 01:01:46.592705 | orchestrator | Thursday 01 January 2026 00:53:29 +0000 (0:00:00.938) 0:04:02.009 ****** 2026-01-01 01:01:46.592715 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.592724 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.592734 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.592743 | orchestrator | 2026-01-01 01:01:46.592752 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-01 01:01:46.592760 | orchestrator | Thursday 01 January 2026 00:53:29 +0000 (0:00:00.375) 0:04:02.384 ****** 2026-01-01 01:01:46.592769 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.592778 | orchestrator | 2026-01-01 01:01:46.592787 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-01 01:01:46.592798 | orchestrator | Thursday 01 January 2026 00:53:29 +0000 (0:00:00.233) 0:04:02.618 ****** 2026-01-01 01:01:46.592807 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.592816 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.592827 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.592836 | orchestrator | 2026-01-01 01:01:46.592846 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-01 01:01:46.592855 | orchestrator | Thursday 01 January 2026 00:53:30 +0000 (0:00:00.413) 0:04:03.031 ****** 
2026-01-01 01:01:46.592865 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.592891 | orchestrator | 2026-01-01 01:01:46.592902 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-01 01:01:46.592912 | orchestrator | Thursday 01 January 2026 00:53:30 +0000 (0:00:00.311) 0:04:03.343 ****** 2026-01-01 01:01:46.592921 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.592930 | orchestrator | 2026-01-01 01:01:46.592939 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-01 01:01:46.592947 | orchestrator | Thursday 01 January 2026 00:53:30 +0000 (0:00:00.231) 0:04:03.575 ****** 2026-01-01 01:01:46.592956 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.592964 | orchestrator | 2026-01-01 01:01:46.592973 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-01 01:01:46.592981 | orchestrator | Thursday 01 January 2026 00:53:31 +0000 (0:00:00.188) 0:04:03.764 ****** 2026-01-01 01:01:46.592989 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.592998 | orchestrator | 2026-01-01 01:01:46.593007 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-01 01:01:46.593015 | orchestrator | Thursday 01 January 2026 00:53:31 +0000 (0:00:00.248) 0:04:04.012 ****** 2026-01-01 01:01:46.593024 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.593033 | orchestrator | 2026-01-01 01:01:46.593042 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-01 01:01:46.593050 | orchestrator | Thursday 01 January 2026 00:53:32 +0000 (0:00:00.851) 0:04:04.863 ****** 2026-01-01 01:01:46.593059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:01:46.593066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 
01:01:46.593075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:01:46.593084 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.593092 | orchestrator | 2026-01-01 01:01:46.593100 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-01 01:01:46.593156 | orchestrator | Thursday 01 January 2026 00:53:32 +0000 (0:00:00.443) 0:04:05.307 ****** 2026-01-01 01:01:46.593167 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.593176 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.593185 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.593193 | orchestrator | 2026-01-01 01:01:46.593200 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-01 01:01:46.593209 | orchestrator | Thursday 01 January 2026 00:53:33 +0000 (0:00:00.377) 0:04:05.685 ****** 2026-01-01 01:01:46.593218 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.593226 | orchestrator | 2026-01-01 01:01:46.593234 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-01 01:01:46.593243 | orchestrator | Thursday 01 January 2026 00:53:33 +0000 (0:00:00.239) 0:04:05.925 ****** 2026-01-01 01:01:46.593251 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.593259 | orchestrator | 2026-01-01 01:01:46.593268 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-01 01:01:46.593276 | orchestrator | Thursday 01 January 2026 00:53:33 +0000 (0:00:00.215) 0:04:06.140 ****** 2026-01-01 01:01:46.593285 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.593294 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.593303 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.593312 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, 
testbed-node-5, testbed-node-4
2026-01-01 01:01:46.593321 | orchestrator | 
2026-01-01 01:01:46.593330 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-01 01:01:46.593340 | orchestrator | Thursday 01 January 2026 00:53:34 +0000 (0:00:01.123) 0:04:07.264 ******
2026-01-01 01:01:46.593350 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.593360 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.593369 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.593377 | orchestrator | 
2026-01-01 01:01:46.593386 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-01 01:01:46.593406 | orchestrator | Thursday 01 January 2026 00:53:35 +0000 (0:00:00.403) 0:04:07.668 ******
2026-01-01 01:01:46.593416 | orchestrator | changed: [testbed-node-3]
2026-01-01 01:01:46.593425 | orchestrator | changed: [testbed-node-4]
2026-01-01 01:01:46.593497 | orchestrator | changed: [testbed-node-5]
2026-01-01 01:01:46.593511 | orchestrator | 
2026-01-01 01:01:46.593521 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-01 01:01:46.593531 | orchestrator | Thursday 01 January 2026 00:53:36 +0000 (0:00:01.541) 0:04:09.209 ******
2026-01-01 01:01:46.593540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-01-01 01:01:46.593549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-01-01 01:01:46.593559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-01-01 01:01:46.593568 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.593577 | orchestrator | 
2026-01-01 01:01:46.593586 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-01 01:01:46.593595 | orchestrator | Thursday 01 January 2026 00:53:37 +0000 (0:00:00.954) 0:04:10.164 ******
2026-01-01 01:01:46.593604 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.593614 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.593624 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.593633 | orchestrator | 
2026-01-01 01:01:46.593642 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-01-01 01:01:46.593652 | orchestrator | Thursday 01 January 2026 00:53:38 +0000 (0:00:00.595) 0:04:10.759 ******
2026-01-01 01:01:46.593661 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.593670 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.593679 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.593689 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 01:01:46.593699 | orchestrator | 
2026-01-01 01:01:46.593708 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-01-01 01:01:46.593717 | orchestrator | Thursday 01 January 2026 00:53:39 +0000 (0:00:00.939) 0:04:11.698 ******
2026-01-01 01:01:46.593726 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.593736 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.593746 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.593755 | orchestrator | 
2026-01-01 01:01:46.593764 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-01-01 01:01:46.593774 | orchestrator | Thursday 01 January 2026 00:53:39 +0000 (0:00:00.754) 0:04:12.452 ******
2026-01-01 01:01:46.593783 | orchestrator | changed: [testbed-node-3]
2026-01-01 01:01:46.593793 | orchestrator | changed: [testbed-node-5]
2026-01-01 01:01:46.593802 | orchestrator | changed: [testbed-node-4]
2026-01-01 01:01:46.593812 | orchestrator | 
2026-01-01 01:01:46.593822 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-01-01 01:01:46.593831 | orchestrator | Thursday 01 January 2026 00:53:41 +0000 (0:00:01.664) 0:04:14.117 ******
2026-01-01 01:01:46.593840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-01-01 01:01:46.593850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-01-01 01:01:46.593859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-01-01 01:01:46.593869 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.593880 | orchestrator | 
2026-01-01 01:01:46.593889 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-01 01:01:46.593899 | orchestrator | Thursday 01 January 2026 00:53:42 +0000 (0:00:00.657) 0:04:14.775 ******
2026-01-01 01:01:46.593909 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.593919 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.593928 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.593938 | orchestrator | 
2026-01-01 01:01:46.593947 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-01-01 01:01:46.593958 | orchestrator | Thursday 01 January 2026 00:53:42 +0000 (0:00:00.351) 0:04:15.127 ******
2026-01-01 01:01:46.593976 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.593985 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.593994 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.594001 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.594010 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.594099 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.594111 | orchestrator | 
2026-01-01 01:01:46.594120 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-01 01:01:46.594128 | orchestrator | Thursday 01 January 2026 00:53:43 +0000 (0:00:00.882) 0:04:16.010 ******
2026-01-01 01:01:46.594137 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.594146 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.594155 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.594163 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.594171 | orchestrator | 
2026-01-01 01:01:46.594179 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-01 01:01:46.594189 | orchestrator | Thursday 01 January 2026 00:53:44 +0000 (0:00:00.836) 0:04:16.846 ******
2026-01-01 01:01:46.594197 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.594206 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.594215 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.594224 | orchestrator | 
2026-01-01 01:01:46.594233 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-01 01:01:46.594242 | orchestrator | Thursday 01 January 2026 00:53:44 +0000 (0:00:00.717) 0:04:17.564 ******
2026-01-01 01:01:46.594250 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.594259 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.594268 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.594276 | orchestrator | 
2026-01-01 01:01:46.594285 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-01 01:01:46.594293 | orchestrator | Thursday 01 January 2026 00:53:46 +0000 (0:00:01.514) 0:04:19.078 ******
2026-01-01 01:01:46.594302 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-01-01 01:01:46.594312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-01-01 01:01:46.594321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-01-01 01:01:46.594329 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.594338 | orchestrator | 
2026-01-01 01:01:46.594347 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-01 01:01:46.594361 | orchestrator | Thursday 01 January 2026 00:53:47 +0000 (0:00:00.627) 0:04:19.706 ******
2026-01-01 01:01:46.594369 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.594378 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.594387 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.594396 | orchestrator | 
2026-01-01 01:01:46.594405 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-01-01 01:01:46.594413 | orchestrator | 
2026-01-01 01:01:46.594422 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-01 01:01:46.594431 | orchestrator | Thursday 01 January 2026 00:53:47 +0000 (0:00:00.581) 0:04:20.287 ******
2026-01-01 01:01:46.594459 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.594468 | orchestrator | 
2026-01-01 01:01:46.594476 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-01 01:01:46.594484 | orchestrator | Thursday 01 January 2026 00:53:48 +0000 (0:00:00.907) 0:04:21.195 ******
2026-01-01 01:01:46.594492 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.594501 | orchestrator | 
2026-01-01 01:01:46.594510 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-01 01:01:46.594519 | orchestrator | Thursday 01 January 2026 00:53:49 +0000 (0:00:00.541) 0:04:21.736 ******
2026-01-01 01:01:46.594551 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.594560 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.594569 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.594577 | orchestrator | 
2026-01-01 01:01:46.594587 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-01 01:01:46.594595 | orchestrator | Thursday 01 January 2026 00:53:50 +0000 (0:00:01.157) 0:04:22.894 ******
2026-01-01 01:01:46.594604 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.594612 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.594620 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.594628 | orchestrator | 
2026-01-01 01:01:46.594637 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-01 01:01:46.594646 | orchestrator | Thursday 01 January 2026 00:53:50 +0000 (0:00:00.363) 0:04:23.258 ******
2026-01-01 01:01:46.594655 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.594664 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.594673 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.594682 | orchestrator | 
2026-01-01 01:01:46.594691 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-01 01:01:46.594700 | orchestrator | Thursday 01 January 2026 00:53:50 +0000 (0:00:00.353) 0:04:23.611 ******
2026-01-01 01:01:46.594708 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.594717 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.594726 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.594734 | orchestrator | 
2026-01-01 01:01:46.594742 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-01 01:01:46.594752 | orchestrator | Thursday 01 January 2026 00:53:51 +0000 (0:00:00.445) 0:04:24.057 ******
2026-01-01 01:01:46.594760 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.594768 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.594776 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.594786 | orchestrator | 
2026-01-01 01:01:46.594794 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-01 01:01:46.594802 | orchestrator | Thursday 01 January 2026 00:53:52 +0000 (0:00:01.200) 0:04:25.257 ******
2026-01-01 01:01:46.594812 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.594820 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.594829 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.594837 | orchestrator | 
2026-01-01 01:01:46.594845 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-01 01:01:46.594853 | orchestrator | Thursday 01 January 2026 00:53:52 +0000 (0:00:00.326) 0:04:25.584 ******
2026-01-01 01:01:46.594904 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.594917 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.594926 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.594935 | orchestrator | 
2026-01-01 01:01:46.594943 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-01 01:01:46.594952 | orchestrator | Thursday 01 January 2026 00:53:53 +0000 (0:00:00.364) 0:04:25.949 ******
2026-01-01 01:01:46.594960 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.594968 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.594976 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.594984 | orchestrator | 
2026-01-01 01:01:46.594992 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-01 01:01:46.595000 | orchestrator | Thursday 01 January 2026 00:53:54 +0000 (0:00:00.791) 0:04:26.741 ******
2026-01-01 01:01:46.595008 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.595017 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.595025 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.595033 | orchestrator | 
2026-01-01 01:01:46.595041 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-01 01:01:46.595050 | orchestrator | Thursday 01 January 2026 00:53:54 +0000 (0:00:00.892) 0:04:27.634 ******
2026-01-01 01:01:46.595057 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.595073 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.595081 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.595088 | orchestrator | 
2026-01-01 01:01:46.595095 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-01 01:01:46.595103 | orchestrator | Thursday 01 January 2026 00:53:55 +0000 (0:00:00.695) 0:04:28.329 ******
2026-01-01 01:01:46.595110 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.595117 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.595124 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.595131 | orchestrator | 
2026-01-01 01:01:46.595139 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-01 01:01:46.595146 | orchestrator | Thursday 01 January 2026 00:53:56 +0000 (0:00:00.514) 0:04:28.843 ******
2026-01-01 01:01:46.595153 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.595161 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.595170 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.595178 | orchestrator | 
2026-01-01 01:01:46.595194 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-01 01:01:46.595203 | orchestrator | Thursday 01 January 2026 00:53:56 +0000 (0:00:00.420) 0:04:29.264 ******
2026-01-01 01:01:46.595211 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.595219 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.595227 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.595235 | orchestrator | 
2026-01-01 01:01:46.595243 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-01 01:01:46.595252 | orchestrator | Thursday 01 January 2026 00:53:57 +0000 (0:00:00.391) 0:04:29.655 ******
2026-01-01 01:01:46.595260 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.595268 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.595275 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.595283 | orchestrator | 
2026-01-01 01:01:46.595291 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-01 01:01:46.595298 | orchestrator | Thursday 01 January 2026 00:53:57 +0000 (0:00:00.767) 0:04:30.423 ******
2026-01-01 01:01:46.595306 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.595313 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.595322 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.595329 | orchestrator | 
2026-01-01 01:01:46.595337 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-01 01:01:46.595344 | orchestrator | Thursday 01 January 2026 00:53:58 +0000 (0:00:00.631) 0:04:31.054 ******
2026-01-01 01:01:46.595351 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.595359 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.595367 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.595375 | orchestrator | 
2026-01-01 01:01:46.595383 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-01 01:01:46.595391 | orchestrator | Thursday 01 January 2026 00:53:58 +0000 (0:00:00.416) 0:04:31.471 ******
2026-01-01 01:01:46.595399 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.595406 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.595414 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.595421 | orchestrator | 
2026-01-01 01:01:46.595429 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-01 01:01:46.595457 | orchestrator | Thursday 01 January 2026 00:53:59 +0000 (0:00:00.450) 0:04:31.922 ******
2026-01-01 01:01:46.595467 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.595474 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.595482 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.595489 | orchestrator | 
2026-01-01 01:01:46.595497 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-01 01:01:46.595504 | orchestrator | Thursday 01 January 2026 00:54:00 +0000 (0:00:00.879) 0:04:32.801 ******
2026-01-01 01:01:46.595511 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.595518 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.595534 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.595542 | orchestrator | 
2026-01-01 01:01:46.595550 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-01-01 01:01:46.595558 | orchestrator | Thursday 01 January 2026 00:54:00 +0000 (0:00:00.733) 0:04:33.535 ******
2026-01-01 01:01:46.595565 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.595573 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.595580 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.595587 | orchestrator | 
2026-01-01 01:01:46.595594 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-01-01 01:01:46.595602 | orchestrator | Thursday 01 January 2026 00:54:01 +0000 (0:00:00.303) 0:04:33.838 ******
2026-01-01 01:01:46.595609 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.595617 | orchestrator | 
2026-01-01 01:01:46.595625 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-01-01 01:01:46.595632 | orchestrator | Thursday 01 January 2026 00:54:01 +0000 (0:00:00.706) 0:04:34.545 ******
2026-01-01 01:01:46.595639 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.595647 | orchestrator | 
2026-01-01 01:01:46.595707 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-01-01 01:01:46.595719 | orchestrator | Thursday 01 January 2026 00:54:02 +0000 (0:00:00.127) 0:04:34.673 ******
2026-01-01 01:01:46.595727 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-01 01:01:46.595735 | orchestrator | 
2026-01-01 01:01:46.595742 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-01-01 01:01:46.595750 | orchestrator | Thursday 01 January 2026 00:54:03 +0000 (0:00:01.040) 0:04:35.713 ******
2026-01-01 01:01:46.595757 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.595765 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.595772 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.595780 | orchestrator | 
2026-01-01 01:01:46.595788 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-01-01 01:01:46.595796 | orchestrator | Thursday 01 January 2026 00:54:03 +0000 (0:00:00.370) 0:04:36.084 ******
2026-01-01 01:01:46.595803 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.595811 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.595818 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.595826 | orchestrator | 
2026-01-01 01:01:46.595834 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-01-01 01:01:46.595841 | orchestrator | Thursday 01 January 2026 00:54:03 +0000 (0:00:00.399) 0:04:36.484 ******
2026-01-01 01:01:46.595849 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.595857 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.595864 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.595871 | orchestrator | 
2026-01-01 01:01:46.595878 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-01-01 01:01:46.595885 | orchestrator | Thursday 01 January 2026 00:54:05 +0000 (0:00:01.599) 0:04:38.084 ******
2026-01-01 01:01:46.595893 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.595900 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.595907 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.595915 | orchestrator | 
2026-01-01 01:01:46.595922 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-01-01 01:01:46.595930 | orchestrator | Thursday 01 January 2026 00:54:06 +0000 (0:00:01.072) 0:04:39.156 ******
2026-01-01 01:01:46.595937 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.595944 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.595960 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.595968 | orchestrator | 
2026-01-01 01:01:46.595975 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-01-01 01:01:46.595982 | orchestrator | Thursday 01 January 2026 00:54:07 +0000 (0:00:00.704) 0:04:39.861 ******
2026-01-01 01:01:46.595991 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.595998 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.596014 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.596022 | orchestrator | 
2026-01-01 01:01:46.596030 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-01-01 01:01:46.596037 | orchestrator | Thursday 01 January 2026 00:54:08 +0000 (0:00:00.854) 0:04:40.715 ******
2026-01-01 01:01:46.596045 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.596052 | orchestrator | 
2026-01-01 01:01:46.596059 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-01-01 01:01:46.596067 | orchestrator | Thursday 01 January 2026 00:54:09 +0000 (0:00:01.270) 0:04:41.986 ******
2026-01-01 01:01:46.596075 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.596082 | orchestrator | 
2026-01-01 01:01:46.596089 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-01-01 01:01:46.596097 | orchestrator | Thursday 01 January 2026 00:54:10 +0000 (0:00:01.424) 0:04:43.410 ******
2026-01-01 01:01:46.596104 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-01 01:01:46.596112 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-01 01:01:46.596120 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-01 01:01:46.596127 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-01 01:01:46.596135 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-01 01:01:46.596143 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-01-01 01:01:46.596151 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-01 01:01:46.596158 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-01-01 01:01:46.596167 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-01-01 01:01:46.596175 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-01-01 01:01:46.596182 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-01 01:01:46.596190 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-01-01 01:01:46.596198 | orchestrator | 
2026-01-01 01:01:46.596206 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-01-01 01:01:46.596213 | orchestrator | Thursday 01 January 2026 00:54:15 +0000 (0:00:04.348) 0:04:47.759 ******
2026-01-01 01:01:46.596221 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.596229 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.596236 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.596244 | orchestrator | 
2026-01-01 01:01:46.596251 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-01-01 01:01:46.596259 | orchestrator | Thursday 01 January 2026 00:54:16 +0000 (0:00:01.882) 0:04:49.641 ******
2026-01-01 01:01:46.596268 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.596276 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.596284 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.596291 | orchestrator | 
2026-01-01 01:01:46.596299 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-01-01 01:01:46.596308 | orchestrator | Thursday 01 January 2026 00:54:17 +0000 (0:00:00.902) 0:04:50.543 ******
2026-01-01 01:01:46.596315 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.596323 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.596330 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.596337 | orchestrator | 
2026-01-01 01:01:46.596344 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-01-01 01:01:46.596351 | orchestrator | Thursday 01 January 2026 00:54:18 +0000 (0:00:00.398) 0:04:50.942 ******
2026-01-01 01:01:46.596358 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.596416 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.596427 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.596482 | orchestrator | 
2026-01-01 01:01:46.596493 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-01-01 01:01:46.596501 | orchestrator | Thursday 01 January 2026 00:54:20 +0000 (0:00:02.653) 0:04:53.595 ******
2026-01-01 01:01:46.596568 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.596579 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.596584 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.596589 | orchestrator | 
2026-01-01 01:01:46.596594 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-01-01 01:01:46.596599 | orchestrator | Thursday 01 January 2026 00:54:22 +0000 (0:00:01.444) 0:04:55.040 ******
2026-01-01 01:01:46.596604 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.596608 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.596613 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.596618 | orchestrator | 
2026-01-01 01:01:46.596623 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-01-01 01:01:46.596627 | orchestrator | Thursday 01 January 2026 00:54:22 +0000 (0:00:00.293) 0:04:55.333 ******
2026-01-01 01:01:46.596632 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-01-01 01:01:46.596637 | orchestrator | 
2026-01-01 01:01:46.596642 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-01-01 01:01:46.596647 | orchestrator | Thursday 01 January 2026 00:54:23 +0000 (0:00:00.705) 0:04:56.039 ******
2026-01-01 01:01:46.596652 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.596657 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.596661 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.596666 | orchestrator | 
2026-01-01 01:01:46.596671 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-01-01 01:01:46.596676 | orchestrator | Thursday 01 January 2026 00:54:23 +0000 (0:00:00.342) 0:04:56.381 ******
2026-01-01 01:01:46.596681 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.596686 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.596690 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.596695 | orchestrator | 
2026-01-01 01:01:46.596707 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-01-01 01:01:46.596712 | orchestrator | Thursday 01 January 2026 00:54:24 +0000 (0:00:00.439) 0:04:56.820 ******
2026-01-01 01:01:46.596716 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.596721 | orchestrator | 
2026-01-01 01:01:46.596726 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-01-01 01:01:46.596731 | orchestrator | Thursday 01 January 2026 00:54:24 +0000 (0:00:00.769) 0:04:57.590 ******
2026-01-01 01:01:46.596736 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.596742 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.596750 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.596758 | orchestrator | 
2026-01-01 01:01:46.596766 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-01-01 01:01:46.596774 | orchestrator | Thursday 01 January 2026 00:54:26 +0000 (0:00:01.965) 0:04:59.555 ******
2026-01-01 01:01:46.596782 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.596789 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.596797 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.596806 | orchestrator | 
2026-01-01 01:01:46.596814 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-01-01 01:01:46.596819 | orchestrator | Thursday 01 January 2026 00:54:28 +0000 (0:00:01.402) 0:05:00.958 ******
2026-01-01 01:01:46.596824 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.596829 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.596834 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.596839 | orchestrator | 
2026-01-01 01:01:46.596843 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-01-01 01:01:46.596848 | orchestrator | Thursday 01 January 2026 00:54:30 +0000 (0:00:02.171) 0:05:03.129 ******
2026-01-01 01:01:46.596853 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.596858 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.596863 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.596872 | orchestrator | 
2026-01-01 01:01:46.596877 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-01-01 01:01:46.596882 | orchestrator | Thursday 01 January 2026 00:54:33 +0000 (0:00:02.708) 0:05:05.837 ******
2026-01-01 01:01:46.596887 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.596892 | orchestrator | 
2026-01-01 01:01:46.596897 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-01-01 01:01:46.596901 | orchestrator | Thursday 01 January 2026 00:54:33 +0000 (0:00:00.597) 0:05:06.435 ******
2026-01-01 01:01:46.596906 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-01-01 01:01:46.596911 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.596916 | orchestrator | 
2026-01-01 01:01:46.596921 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-01-01 01:01:46.596926 | orchestrator | Thursday 01 January 2026 00:54:56 +0000 (0:00:22.233) 0:05:28.669 ******
2026-01-01 01:01:46.596931 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.596936 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.596940 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.596945 | orchestrator | 
2026-01-01 01:01:46.596950 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-01-01 01:01:46.596955 | orchestrator | Thursday 01 January 2026 00:55:05 +0000 (0:00:09.397) 0:05:38.067 ******
2026-01-01 01:01:46.596960 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.596965 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.596970 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.596975 | orchestrator | 
2026-01-01 01:01:46.596980 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-01-01 01:01:46.597013 | orchestrator | Thursday 01 January 2026 00:55:06 +0000 (0:00:00.597) 0:05:38.664 ******
2026-01-01 01:01:46.597020 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d66bb62230ec922aececc23f0b86ee60034cef33'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-01-01 01:01:46.597028 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d66bb62230ec922aececc23f0b86ee60034cef33'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-01-01 01:01:46.597034 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d66bb62230ec922aececc23f0b86ee60034cef33'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-01-01 01:01:46.597043 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d66bb62230ec922aececc23f0b86ee60034cef33'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-01-01 01:01:46.597048 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d66bb62230ec922aececc23f0b86ee60034cef33'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-01-01 01:01:46.597061 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d66bb62230ec922aececc23f0b86ee60034cef33'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d66bb62230ec922aececc23f0b86ee60034cef33'}]) 
2026-01-01 01:01:46.597068 | orchestrator | 
2026-01-01 01:01:46.597072 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-01 01:01:46.597077 | orchestrator | Thursday 01 January 2026 00:55:21 +0000 (0:00:15.362) 0:05:54.027 ******
2026-01-01 01:01:46.597082 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.597086 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.597091 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.597096 | orchestrator | 
2026-01-01 01:01:46.597100 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-01 01:01:46.597105 | orchestrator | Thursday 01 January 2026 00:55:21 +0000 (0:00:00.435) 0:05:54.463 ******
2026-01-01 01:01:46.597110 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.597114 | orchestrator | 
2026-01-01 01:01:46.597119 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-01 01:01:46.597123 | orchestrator | Thursday 01 January 2026 00:55:22 +0000 (0:00:01.076) 0:05:55.539 ******
2026-01-01 01:01:46.597128 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.597132 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.597137 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.597141 | orchestrator | 
2026-01-01 01:01:46.597146 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-01 01:01:46.597151 | orchestrator | Thursday 01 January 2026 00:55:23 +0000 (0:00:00.336) 0:05:55.876 ******
2026-01-01 01:01:46.597155 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.597160 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.597164 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.597169 | orchestrator | 
2026-01-01 01:01:46.597173 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-01 01:01:46.597178 | orchestrator | Thursday 01 January 2026 00:55:23 +0000 (0:00:00.447) 0:05:56.323 ******
2026-01-01 01:01:46.597183 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-01-01 01:01:46.597187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-01-01 01:01:46.597192 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-01-01 01:01:46.597197 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.597201 | orchestrator | 
2026-01-01 01:01:46.597206 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-01 01:01:46.597211 | orchestrator | Thursday 01 January 2026 00:55:24 +0000 (0:00:00.938) 0:05:57.262 ******
2026-01-01 01:01:46.597215 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.597220 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.597241 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.597249 | orchestrator | 
2026-01-01 01:01:46.597257 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-01-01 01:01:46.597265 | orchestrator | 
2026-01-01 01:01:46.597272 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-01 01:01:46.597279 | orchestrator | Thursday 01 January 2026 00:55:25 +0000 (0:00:00.830) 0:05:58.092 ******
2026-01-01 01:01:46.597286 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.597293 | orchestrator | 
2026-01-01 01:01:46.597301 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-01 01:01:46.597307 | orchestrator | Thursday 01 January 2026 00:55:25 +0000 (0:00:00.538) 0:05:58.631 ******
2026-01-01 01:01:46.597314 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-01-01 01:01:46.597326 | orchestrator | 2026-01-01 01:01:46.597333 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-01 01:01:46.597340 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.877) 0:05:59.508 ****** 2026-01-01 01:01:46.597347 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.597355 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.597363 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.597370 | orchestrator | 2026-01-01 01:01:46.597378 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-01 01:01:46.597386 | orchestrator | Thursday 01 January 2026 00:55:27 +0000 (0:00:00.832) 0:06:00.341 ****** 2026-01-01 01:01:46.597393 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597400 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597408 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.597415 | orchestrator | 2026-01-01 01:01:46.597422 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-01 01:01:46.597429 | orchestrator | Thursday 01 January 2026 00:55:28 +0000 (0:00:00.404) 0:06:00.745 ****** 2026-01-01 01:01:46.597456 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597468 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597477 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.597484 | orchestrator | 2026-01-01 01:01:46.597491 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-01 01:01:46.597499 | orchestrator | Thursday 01 January 2026 00:55:28 +0000 (0:00:00.375) 0:06:01.121 ****** 2026-01-01 01:01:46.597505 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597512 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597520 | orchestrator | skipping: 
[testbed-node-2] 2026-01-01 01:01:46.597527 | orchestrator | 2026-01-01 01:01:46.597533 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-01 01:01:46.597541 | orchestrator | Thursday 01 January 2026 00:55:29 +0000 (0:00:00.621) 0:06:01.743 ****** 2026-01-01 01:01:46.597549 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.597557 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.597565 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.597572 | orchestrator | 2026-01-01 01:01:46.597580 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-01 01:01:46.597587 | orchestrator | Thursday 01 January 2026 00:55:30 +0000 (0:00:00.946) 0:06:02.689 ****** 2026-01-01 01:01:46.597591 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597596 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597601 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.597605 | orchestrator | 2026-01-01 01:01:46.597610 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-01 01:01:46.597614 | orchestrator | Thursday 01 January 2026 00:55:30 +0000 (0:00:00.357) 0:06:03.046 ****** 2026-01-01 01:01:46.597619 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597623 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597628 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.597632 | orchestrator | 2026-01-01 01:01:46.597637 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-01 01:01:46.597642 | orchestrator | Thursday 01 January 2026 00:55:30 +0000 (0:00:00.385) 0:06:03.432 ****** 2026-01-01 01:01:46.597646 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.597651 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.597655 | orchestrator | ok: [testbed-node-2] 2026-01-01 
01:01:46.597660 | orchestrator | 2026-01-01 01:01:46.597664 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-01 01:01:46.597669 | orchestrator | Thursday 01 January 2026 00:55:31 +0000 (0:00:01.191) 0:06:04.624 ****** 2026-01-01 01:01:46.597674 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.597678 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.597683 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.597692 | orchestrator | 2026-01-01 01:01:46.597697 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-01 01:01:46.597702 | orchestrator | Thursday 01 January 2026 00:55:32 +0000 (0:00:00.916) 0:06:05.541 ****** 2026-01-01 01:01:46.597706 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597711 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597716 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.597720 | orchestrator | 2026-01-01 01:01:46.597725 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-01 01:01:46.597729 | orchestrator | Thursday 01 January 2026 00:55:33 +0000 (0:00:00.388) 0:06:05.929 ****** 2026-01-01 01:01:46.597734 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.597739 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.597743 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.597748 | orchestrator | 2026-01-01 01:01:46.597752 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-01 01:01:46.597757 | orchestrator | Thursday 01 January 2026 00:55:33 +0000 (0:00:00.463) 0:06:06.392 ****** 2026-01-01 01:01:46.597761 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597766 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597771 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.597775 | orchestrator | 
2026-01-01 01:01:46.597780 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-01 01:01:46.597812 | orchestrator | Thursday 01 January 2026 00:55:34 +0000 (0:00:00.662) 0:06:07.055 ****** 2026-01-01 01:01:46.597817 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597822 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.597827 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597831 | orchestrator | 2026-01-01 01:01:46.597836 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-01 01:01:46.597841 | orchestrator | Thursday 01 January 2026 00:55:34 +0000 (0:00:00.362) 0:06:07.417 ****** 2026-01-01 01:01:46.597845 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597850 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597854 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.597859 | orchestrator | 2026-01-01 01:01:46.597863 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-01 01:01:46.597868 | orchestrator | Thursday 01 January 2026 00:55:35 +0000 (0:00:00.316) 0:06:07.734 ****** 2026-01-01 01:01:46.597873 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597877 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597882 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.597886 | orchestrator | 2026-01-01 01:01:46.597891 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-01 01:01:46.597895 | orchestrator | Thursday 01 January 2026 00:55:35 +0000 (0:00:00.324) 0:06:08.059 ****** 2026-01-01 01:01:46.597900 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.597905 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.597910 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.597914 | orchestrator | 
2026-01-01 01:01:46.597919 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-01 01:01:46.597923 | orchestrator | Thursday 01 January 2026 00:55:35 +0000 (0:00:00.515) 0:06:08.574 ****** 2026-01-01 01:01:46.597928 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.597933 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.597937 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.597942 | orchestrator | 2026-01-01 01:01:46.597946 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-01 01:01:46.597951 | orchestrator | Thursday 01 January 2026 00:55:36 +0000 (0:00:00.303) 0:06:08.877 ****** 2026-01-01 01:01:46.597956 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.597960 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.597965 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.597969 | orchestrator | 2026-01-01 01:01:46.597978 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-01 01:01:46.597986 | orchestrator | Thursday 01 January 2026 00:55:36 +0000 (0:00:00.274) 0:06:09.152 ****** 2026-01-01 01:01:46.597991 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.597995 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.598000 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.598004 | orchestrator | 2026-01-01 01:01:46.598009 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-01 01:01:46.598035 | orchestrator | Thursday 01 January 2026 00:55:37 +0000 (0:00:00.518) 0:06:09.670 ****** 2026-01-01 01:01:46.598040 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-01 01:01:46.598045 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 01:01:46.598049 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-01-01 01:01:46.598054 | orchestrator | 2026-01-01 01:01:46.598059 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-01 01:01:46.598064 | orchestrator | Thursday 01 January 2026 00:55:37 +0000 (0:00:00.936) 0:06:10.607 ****** 2026-01-01 01:01:46.598068 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.598073 | orchestrator | 2026-01-01 01:01:46.598078 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-01 01:01:46.598082 | orchestrator | Thursday 01 January 2026 00:55:38 +0000 (0:00:00.470) 0:06:11.077 ****** 2026-01-01 01:01:46.598087 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.598092 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.598096 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.598101 | orchestrator | 2026-01-01 01:01:46.598105 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-01 01:01:46.598110 | orchestrator | Thursday 01 January 2026 00:55:39 +0000 (0:00:00.709) 0:06:11.786 ****** 2026-01-01 01:01:46.598115 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.598119 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.598124 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.598128 | orchestrator | 2026-01-01 01:01:46.598133 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-01 01:01:46.598138 | orchestrator | Thursday 01 January 2026 00:55:39 +0000 (0:00:00.443) 0:06:12.230 ****** 2026-01-01 01:01:46.598142 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-01 01:01:46.598147 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-01 01:01:46.598152 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-01-01 01:01:46.598156 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-01 01:01:46.598161 | orchestrator | 2026-01-01 01:01:46.598165 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-01 01:01:46.598170 | orchestrator | Thursday 01 January 2026 00:55:50 +0000 (0:00:10.882) 0:06:23.113 ****** 2026-01-01 01:01:46.598175 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.598179 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.598184 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.598189 | orchestrator | 2026-01-01 01:01:46.598193 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-01 01:01:46.598198 | orchestrator | Thursday 01 January 2026 00:55:50 +0000 (0:00:00.490) 0:06:23.604 ****** 2026-01-01 01:01:46.598202 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-01 01:01:46.598207 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-01 01:01:46.598212 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-01 01:01:46.598216 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-01 01:01:46.598221 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:01:46.598242 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:01:46.598247 | orchestrator | 2026-01-01 01:01:46.598252 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-01 01:01:46.598261 | orchestrator | Thursday 01 January 2026 00:55:53 +0000 (0:00:02.606) 0:06:26.211 ****** 2026-01-01 01:01:46.598266 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-01 01:01:46.598271 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-01 01:01:46.598275 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-01 
01:01:46.598280 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-01 01:01:46.598285 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-01 01:01:46.598289 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-01 01:01:46.598294 | orchestrator | 2026-01-01 01:01:46.598298 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-01 01:01:46.598303 | orchestrator | Thursday 01 January 2026 00:55:54 +0000 (0:00:01.412) 0:06:27.623 ****** 2026-01-01 01:01:46.598308 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.598316 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.598324 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.598331 | orchestrator | 2026-01-01 01:01:46.598340 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-01 01:01:46.598348 | orchestrator | Thursday 01 January 2026 00:55:56 +0000 (0:00:01.043) 0:06:28.667 ****** 2026-01-01 01:01:46.598355 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.598363 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.598372 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.598380 | orchestrator | 2026-01-01 01:01:46.598388 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-01 01:01:46.598395 | orchestrator | Thursday 01 January 2026 00:55:56 +0000 (0:00:00.357) 0:06:29.025 ****** 2026-01-01 01:01:46.598400 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.598405 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.598409 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.598414 | orchestrator | 2026-01-01 01:01:46.598418 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-01 01:01:46.598427 | orchestrator | Thursday 01 January 2026 00:55:56 +0000 (0:00:00.385) 
0:06:29.410 ****** 2026-01-01 01:01:46.598450 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.598459 | orchestrator | 2026-01-01 01:01:46.598467 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-01 01:01:46.598474 | orchestrator | Thursday 01 January 2026 00:55:57 +0000 (0:00:00.562) 0:06:29.972 ****** 2026-01-01 01:01:46.598481 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.598489 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.598495 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.598502 | orchestrator | 2026-01-01 01:01:46.598510 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-01 01:01:46.598517 | orchestrator | Thursday 01 January 2026 00:55:58 +0000 (0:00:00.677) 0:06:30.650 ****** 2026-01-01 01:01:46.598525 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.598533 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:46.598541 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:46.598549 | orchestrator | 2026-01-01 01:01:46.598557 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-01-01 01:01:46.598564 | orchestrator | Thursday 01 January 2026 00:55:58 +0000 (0:00:00.331) 0:06:30.981 ****** 2026-01-01 01:01:46.598572 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.598580 | orchestrator | 2026-01-01 01:01:46.598588 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-01 01:01:46.598596 | orchestrator | Thursday 01 January 2026 00:55:59 +0000 (0:00:00.713) 0:06:31.695 ****** 2026-01-01 01:01:46.598603 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.598611 | orchestrator | 
changed: [testbed-node-1] 2026-01-01 01:01:46.598618 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.598634 | orchestrator | 2026-01-01 01:01:46.598642 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-01 01:01:46.598649 | orchestrator | Thursday 01 January 2026 00:56:00 +0000 (0:00:01.861) 0:06:33.557 ****** 2026-01-01 01:01:46.598655 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.598663 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.598670 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.598676 | orchestrator | 2026-01-01 01:01:46.598683 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-01 01:01:46.598691 | orchestrator | Thursday 01 January 2026 00:56:02 +0000 (0:00:01.720) 0:06:35.278 ****** 2026-01-01 01:01:46.598698 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.598705 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.598712 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.598720 | orchestrator | 2026-01-01 01:01:46.598728 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-01-01 01:01:46.598736 | orchestrator | Thursday 01 January 2026 00:56:04 +0000 (0:00:02.026) 0:06:37.304 ****** 2026-01-01 01:01:46.598744 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.598751 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.598758 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.598766 | orchestrator | 2026-01-01 01:01:46.598774 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-01 01:01:46.598781 | orchestrator | Thursday 01 January 2026 00:56:07 +0000 (0:00:02.878) 0:06:40.183 ****** 2026-01-01 01:01:46.598788 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.598795 | orchestrator | skipping: 
[testbed-node-1] 2026-01-01 01:01:46.598801 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-01 01:01:46.598808 | orchestrator | 2026-01-01 01:01:46.598816 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-01 01:01:46.598824 | orchestrator | Thursday 01 January 2026 00:56:08 +0000 (0:00:00.524) 0:06:40.708 ****** 2026-01-01 01:01:46.598863 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-01 01:01:46.598872 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-01 01:01:46.598879 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-01 01:01:46.598886 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-01 01:01:46.598893 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-01-01 01:01:46.598901 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-01-01 01:01:46.598908 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-01 01:01:46.598915 | orchestrator | 2026-01-01 01:01:46.598922 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-01 01:01:46.598929 | orchestrator | Thursday 01 January 2026 00:56:44 +0000 (0:00:36.205) 0:07:16.913 ****** 2026-01-01 01:01:46.598935 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-01 01:01:46.598942 | orchestrator | 2026-01-01 01:01:46.598948 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-01 01:01:46.598955 | orchestrator | Thursday 01 January 2026 00:56:45 +0000 (0:00:01.385) 0:07:18.299 ****** 2026-01-01 01:01:46.598962 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.598969 | orchestrator | 2026-01-01 01:01:46.598975 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-01 01:01:46.598981 | orchestrator | Thursday 01 January 2026 00:56:45 +0000 (0:00:00.315) 0:07:18.615 ****** 2026-01-01 01:01:46.598988 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.598995 | orchestrator | 2026-01-01 01:01:46.599002 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-01 01:01:46.599018 | orchestrator | Thursday 01 January 2026 00:56:46 +0000 (0:00:00.155) 0:07:18.771 ****** 2026-01-01 01:01:46.599030 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-01 01:01:46.599039 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-01 01:01:46.599046 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-01 01:01:46.599053 | orchestrator | 2026-01-01 01:01:46.599060 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-01-01 01:01:46.599067 | orchestrator | Thursday 01 January 2026 00:56:52 +0000 (0:00:06.768) 0:07:25.539 ****** 2026-01-01 01:01:46.599075 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-01 01:01:46.599082 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-01 01:01:46.599089 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-01 01:01:46.599096 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-01 01:01:46.599103 | orchestrator | 2026-01-01 01:01:46.599110 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-01 01:01:46.599117 | orchestrator | Thursday 01 January 2026 00:56:58 +0000 (0:00:05.329) 0:07:30.869 ****** 2026-01-01 01:01:46.599124 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.599131 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.599137 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.599144 | orchestrator | 2026-01-01 01:01:46.599151 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-01 01:01:46.599158 | orchestrator | Thursday 01 January 2026 00:56:59 +0000 (0:00:00.780) 0:07:31.650 ****** 2026-01-01 01:01:46.599165 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:01:46.599172 | orchestrator | 2026-01-01 01:01:46.599179 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-01 01:01:46.599186 | orchestrator | Thursday 01 January 2026 00:56:59 +0000 (0:00:00.542) 0:07:32.192 ****** 2026-01-01 01:01:46.599193 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.599200 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.599207 | orchestrator | ok: 
[testbed-node-2] 2026-01-01 01:01:46.599214 | orchestrator | 2026-01-01 01:01:46.599222 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-01 01:01:46.599228 | orchestrator | Thursday 01 January 2026 00:57:00 +0000 (0:00:00.636) 0:07:32.829 ****** 2026-01-01 01:01:46.599235 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:01:46.599242 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:01:46.599249 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:01:46.599257 | orchestrator | 2026-01-01 01:01:46.599264 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-01 01:01:46.599271 | orchestrator | Thursday 01 January 2026 00:57:01 +0000 (0:00:01.300) 0:07:34.130 ****** 2026-01-01 01:01:46.599278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-01 01:01:46.599285 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-01 01:01:46.599293 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-01 01:01:46.599300 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:46.599307 | orchestrator | 2026-01-01 01:01:46.599314 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-01 01:01:46.599321 | orchestrator | Thursday 01 January 2026 00:57:02 +0000 (0:00:00.642) 0:07:34.772 ****** 2026-01-01 01:01:46.599328 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:46.599335 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:46.599343 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:46.599349 | orchestrator | 2026-01-01 01:01:46.599357 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-01 01:01:46.599373 | orchestrator | 2026-01-01 01:01:46.599381 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-01 
01:01:46.599432 | orchestrator | Thursday 01 January 2026 00:57:02 +0000 (0:00:00.621) 0:07:35.393 ****** 2026-01-01 01:01:46.599463 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.599470 | orchestrator | 2026-01-01 01:01:46.599478 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-01 01:01:46.599485 | orchestrator | Thursday 01 January 2026 00:57:03 +0000 (0:00:00.806) 0:07:36.200 ****** 2026-01-01 01:01:46.599493 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.599501 | orchestrator | 2026-01-01 01:01:46.599509 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-01 01:01:46.599516 | orchestrator | Thursday 01 January 2026 00:57:04 +0000 (0:00:00.547) 0:07:36.747 ****** 2026-01-01 01:01:46.599523 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.599531 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.599539 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.599547 | orchestrator | 2026-01-01 01:01:46.599554 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-01 01:01:46.599562 | orchestrator | Thursday 01 January 2026 00:57:04 +0000 (0:00:00.610) 0:07:37.357 ****** 2026-01-01 01:01:46.599570 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.599578 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.599586 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.599594 | orchestrator | 2026-01-01 01:01:46.599602 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-01 01:01:46.599609 | orchestrator | Thursday 01 January 2026 00:57:05 +0000 (0:00:00.934) 0:07:38.292 ****** 
2026-01-01 01:01:46.599617 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.599625 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.599632 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.599639 | orchestrator | 2026-01-01 01:01:46.599647 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-01 01:01:46.599654 | orchestrator | Thursday 01 January 2026 00:57:06 +0000 (0:00:00.742) 0:07:39.035 ****** 2026-01-01 01:01:46.599668 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.599676 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.599684 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.599691 | orchestrator | 2026-01-01 01:01:46.599699 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-01 01:01:46.599706 | orchestrator | Thursday 01 January 2026 00:57:07 +0000 (0:00:00.717) 0:07:39.752 ****** 2026-01-01 01:01:46.599714 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.599722 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.599729 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.599737 | orchestrator | 2026-01-01 01:01:46.599745 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-01 01:01:46.599753 | orchestrator | Thursday 01 January 2026 00:57:07 +0000 (0:00:00.629) 0:07:40.382 ****** 2026-01-01 01:01:46.599761 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.599768 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.599776 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.599783 | orchestrator | 2026-01-01 01:01:46.599791 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-01 01:01:46.599799 | orchestrator | Thursday 01 January 2026 00:57:08 +0000 (0:00:00.310) 0:07:40.692 ****** 2026-01-01 01:01:46.599806 | 
orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.599814 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.599821 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.599828 | orchestrator | 2026-01-01 01:01:46.599837 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-01 01:01:46.599845 | orchestrator | Thursday 01 January 2026 00:57:08 +0000 (0:00:00.346) 0:07:41.039 ****** 2026-01-01 01:01:46.599861 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.599868 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.599877 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.599885 | orchestrator | 2026-01-01 01:01:46.599893 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-01 01:01:46.599901 | orchestrator | Thursday 01 January 2026 00:57:09 +0000 (0:00:00.723) 0:07:41.763 ****** 2026-01-01 01:01:46.599908 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.599916 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.599924 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.599932 | orchestrator | 2026-01-01 01:01:46.599940 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-01 01:01:46.599948 | orchestrator | Thursday 01 January 2026 00:57:10 +0000 (0:00:01.058) 0:07:42.822 ****** 2026-01-01 01:01:46.599955 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.599964 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.599971 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.599979 | orchestrator | 2026-01-01 01:01:46.599986 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-01 01:01:46.599995 | orchestrator | Thursday 01 January 2026 00:57:10 +0000 (0:00:00.313) 0:07:43.136 ****** 2026-01-01 01:01:46.600003 | orchestrator | skipping: 
[testbed-node-3] 2026-01-01 01:01:46.600011 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.600018 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.600025 | orchestrator | 2026-01-01 01:01:46.600033 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-01 01:01:46.600041 | orchestrator | Thursday 01 January 2026 00:57:10 +0000 (0:00:00.335) 0:07:43.471 ****** 2026-01-01 01:01:46.600050 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.600058 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.600066 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.600073 | orchestrator | 2026-01-01 01:01:46.600081 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-01 01:01:46.600090 | orchestrator | Thursday 01 January 2026 00:57:11 +0000 (0:00:00.345) 0:07:43.816 ****** 2026-01-01 01:01:46.600098 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.600106 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.600114 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.600122 | orchestrator | 2026-01-01 01:01:46.600130 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-01 01:01:46.600184 | orchestrator | Thursday 01 January 2026 00:57:11 +0000 (0:00:00.668) 0:07:44.485 ****** 2026-01-01 01:01:46.600195 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.600202 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.600209 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.600216 | orchestrator | 2026-01-01 01:01:46.600223 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-01 01:01:46.600231 | orchestrator | Thursday 01 January 2026 00:57:12 +0000 (0:00:00.383) 0:07:44.869 ****** 2026-01-01 01:01:46.600238 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.600245 | 
orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.600252 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.600259 | orchestrator | 2026-01-01 01:01:46.600266 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-01 01:01:46.600274 | orchestrator | Thursday 01 January 2026 00:57:12 +0000 (0:00:00.310) 0:07:45.179 ****** 2026-01-01 01:01:46.600281 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.600289 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.600296 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.600303 | orchestrator | 2026-01-01 01:01:46.600309 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-01 01:01:46.600316 | orchestrator | Thursday 01 January 2026 00:57:12 +0000 (0:00:00.313) 0:07:45.493 ****** 2026-01-01 01:01:46.600324 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.600340 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.600347 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.600355 | orchestrator | 2026-01-01 01:01:46.600363 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-01 01:01:46.600370 | orchestrator | Thursday 01 January 2026 00:57:13 +0000 (0:00:00.601) 0:07:46.095 ****** 2026-01-01 01:01:46.600378 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.600385 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.600393 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.600400 | orchestrator | 2026-01-01 01:01:46.600406 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-01 01:01:46.600413 | orchestrator | Thursday 01 January 2026 00:57:13 +0000 (0:00:00.368) 0:07:46.464 ****** 2026-01-01 01:01:46.600420 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.600427 | orchestrator | ok: 
[testbed-node-4] 2026-01-01 01:01:46.600560 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.600576 | orchestrator | 2026-01-01 01:01:46.600584 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-01-01 01:01:46.600593 | orchestrator | Thursday 01 January 2026 00:57:14 +0000 (0:00:00.585) 0:07:47.049 ****** 2026-01-01 01:01:46.600600 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.600608 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.600615 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.600623 | orchestrator | 2026-01-01 01:01:46.600631 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-01 01:01:46.600638 | orchestrator | Thursday 01 January 2026 00:57:15 +0000 (0:00:00.750) 0:07:47.800 ****** 2026-01-01 01:01:46.600646 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 01:01:46.600654 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 01:01:46.600660 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 01:01:46.600665 | orchestrator | 2026-01-01 01:01:46.600669 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-01 01:01:46.600674 | orchestrator | Thursday 01 January 2026 00:57:15 +0000 (0:00:00.727) 0:07:48.527 ****** 2026-01-01 01:01:46.600679 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.600684 | orchestrator | 2026-01-01 01:01:46.600689 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-01 01:01:46.600693 | orchestrator | Thursday 01 January 2026 00:57:16 +0000 (0:00:00.588) 0:07:49.115 ****** 2026-01-01 01:01:46.600698 | orchestrator | skipping: 
[testbed-node-3] 2026-01-01 01:01:46.600703 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.600707 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.600712 | orchestrator | 2026-01-01 01:01:46.600716 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-01 01:01:46.600721 | orchestrator | Thursday 01 January 2026 00:57:16 +0000 (0:00:00.321) 0:07:49.437 ****** 2026-01-01 01:01:46.600726 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.600730 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.600735 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.600740 | orchestrator | 2026-01-01 01:01:46.600744 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-01 01:01:46.600749 | orchestrator | Thursday 01 January 2026 00:57:17 +0000 (0:00:00.640) 0:07:50.077 ****** 2026-01-01 01:01:46.600753 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.600758 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.600763 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.600767 | orchestrator | 2026-01-01 01:01:46.600772 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-01 01:01:46.600776 | orchestrator | Thursday 01 January 2026 00:57:18 +0000 (0:00:00.603) 0:07:50.681 ****** 2026-01-01 01:01:46.600780 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.600793 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.600797 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.600801 | orchestrator | 2026-01-01 01:01:46.600806 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-01 01:01:46.600810 | orchestrator | Thursday 01 January 2026 00:57:18 +0000 (0:00:00.572) 0:07:51.254 ****** 2026-01-01 01:01:46.600814 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-01 01:01:46.600819 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-01 01:01:46.600823 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-01 01:01:46.600827 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-01 01:01:46.600839 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-01 01:01:46.600843 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-01 01:01:46.600848 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-01 01:01:46.600852 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-01 01:01:46.600856 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-01 01:01:46.600860 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-01 01:01:46.600864 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-01 01:01:46.600868 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-01 01:01:46.600872 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-01 01:01:46.600877 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-01 01:01:46.600881 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-01 01:01:46.600885 | orchestrator | 2026-01-01 01:01:46.600889 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-01-01 01:01:46.600893 | orchestrator | Thursday 01 January 2026 00:57:24 +0000 (0:00:05.470) 0:07:56.724 ****** 2026-01-01 01:01:46.600897 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.600901 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.600906 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.600910 | orchestrator | 2026-01-01 01:01:46.600914 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-01-01 01:01:46.600919 | orchestrator | Thursday 01 January 2026 00:57:24 +0000 (0:00:00.491) 0:07:57.216 ****** 2026-01-01 01:01:46.600926 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.600931 | orchestrator | 2026-01-01 01:01:46.600935 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-01-01 01:01:46.600939 | orchestrator | Thursday 01 January 2026 00:57:25 +0000 (0:00:00.472) 0:07:57.688 ****** 2026-01-01 01:01:46.600943 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-01 01:01:46.600947 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-01 01:01:46.600952 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-01 01:01:46.600956 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-01-01 01:01:46.600960 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-01-01 01:01:46.600964 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-01-01 01:01:46.600968 | orchestrator | 2026-01-01 01:01:46.600973 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-01-01 01:01:46.600977 | orchestrator | Thursday 01 January 2026 00:57:25 +0000 (0:00:00.930) 0:07:58.619 ****** 2026-01-01 01:01:46.600984 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:01:46.600989 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-01 01:01:46.600993 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 01:01:46.600997 | orchestrator | 2026-01-01 01:01:46.601001 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-01-01 01:01:46.601006 | orchestrator | Thursday 01 January 2026 00:57:28 +0000 (0:00:02.678) 0:08:01.297 ****** 2026-01-01 01:01:46.601010 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-01 01:01:46.601014 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-01 01:01:46.601018 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.601022 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-01 01:01:46.601026 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-01 01:01:46.601030 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.601034 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-01 01:01:46.601039 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-01 01:01:46.601043 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.601047 | orchestrator | 2026-01-01 01:01:46.601051 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-01-01 01:01:46.601055 | orchestrator | Thursday 01 January 2026 00:57:29 +0000 (0:00:01.346) 0:08:02.643 ****** 2026-01-01 01:01:46.601059 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-01 01:01:46.601064 | orchestrator | 2026-01-01 01:01:46.601068 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-01-01 01:01:46.601072 | orchestrator | Thursday 01 January 2026 00:57:32 +0000 (0:00:02.549) 0:08:05.193 ****** 2026-01-01 01:01:46.601076 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.601080 | orchestrator | 2026-01-01 01:01:46.601084 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-01-01 01:01:46.601088 | orchestrator | Thursday 01 January 2026 00:57:33 +0000 (0:00:00.516) 0:08:05.710 ****** 2026-01-01 01:01:46.601093 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fb51feb3-1d35-54b5-8ae7-a768510de0c7', 'data_vg': 'ceph-fb51feb3-1d35-54b5-8ae7-a768510de0c7'}) 2026-01-01 01:01:46.601097 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a4d57ac9-6a13-5c0f-8510-006bddd85eb9', 'data_vg': 'ceph-a4d57ac9-6a13-5c0f-8510-006bddd85eb9'}) 2026-01-01 01:01:46.601109 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d1d2d931-4931-5db0-ab2b-78163b321b95', 'data_vg': 'ceph-d1d2d931-4931-5db0-ab2b-78163b321b95'}) 2026-01-01 01:01:46.601113 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3a96b51d-dfb4-5025-9237-a7295cf21147', 'data_vg': 'ceph-3a96b51d-dfb4-5025-9237-a7295cf21147'}) 2026-01-01 01:01:46.601118 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef', 'data_vg': 'ceph-dcdeb624-ddaf-57fa-a1ad-1b7c62a928ef'}) 2026-01-01 01:01:46.601122 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf', 'data_vg': 'ceph-9b8df0c9-f78d-5986-b3c0-5cd01f50e7bf'}) 2026-01-01 01:01:46.601126 | orchestrator | 2026-01-01 01:01:46.601130 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-01-01 01:01:46.601134 | orchestrator | Thursday 01 January 2026 00:58:11 +0000 (0:00:38.271) 0:08:43.981 ****** 2026-01-01 01:01:46.601139 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.601143 | orchestrator | skipping: [testbed-node-4] 2026-01-01 
01:01:46.601147 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.601151 | orchestrator | 2026-01-01 01:01:46.601155 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-01-01 01:01:46.601159 | orchestrator | Thursday 01 January 2026 00:58:11 +0000 (0:00:00.366) 0:08:44.348 ****** 2026-01-01 01:01:46.601167 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.601171 | orchestrator | 2026-01-01 01:01:46.601175 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-01-01 01:01:46.601179 | orchestrator | Thursday 01 January 2026 00:58:12 +0000 (0:00:00.526) 0:08:44.874 ****** 2026-01-01 01:01:46.601184 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.601188 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.601192 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.601196 | orchestrator | 2026-01-01 01:01:46.601203 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-01-01 01:01:46.601207 | orchestrator | Thursday 01 January 2026 00:58:13 +0000 (0:00:01.048) 0:08:45.922 ****** 2026-01-01 01:01:46.601211 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.601215 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.601219 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.601223 | orchestrator | 2026-01-01 01:01:46.601227 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-01-01 01:01:46.601232 | orchestrator | Thursday 01 January 2026 00:58:15 +0000 (0:00:02.639) 0:08:48.561 ****** 2026-01-01 01:01:46.601236 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.601240 | orchestrator | 2026-01-01 01:01:46.601244 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-01-01 01:01:46.601248 | orchestrator | Thursday 01 January 2026 00:58:16 +0000 (0:00:00.535) 0:08:49.097 ****** 2026-01-01 01:01:46.601252 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.601256 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.601261 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.601265 | orchestrator | 2026-01-01 01:01:46.601269 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-01 01:01:46.601273 | orchestrator | Thursday 01 January 2026 00:58:17 +0000 (0:00:01.533) 0:08:50.631 ****** 2026-01-01 01:01:46.601277 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.601281 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.601285 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.601289 | orchestrator | 2026-01-01 01:01:46.601294 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-01-01 01:01:46.601298 | orchestrator | Thursday 01 January 2026 00:58:19 +0000 (0:00:01.237) 0:08:51.868 ****** 2026-01-01 01:01:46.601302 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.601306 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.601310 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.601315 | orchestrator | 2026-01-01 01:01:46.601319 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-01 01:01:46.601323 | orchestrator | Thursday 01 January 2026 00:58:20 +0000 (0:00:01.762) 0:08:53.631 ****** 2026-01-01 01:01:46.601327 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.601331 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.601335 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.601339 | orchestrator | 2026-01-01 01:01:46.601343 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-01-01 01:01:46.601348 | orchestrator | Thursday 01 January 2026 00:58:21 +0000 (0:00:00.363) 0:08:53.994 ****** 2026-01-01 01:01:46.601352 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.601356 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.601360 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.601364 | orchestrator | 2026-01-01 01:01:46.601368 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-01 01:01:46.601372 | orchestrator | Thursday 01 January 2026 00:58:21 +0000 (0:00:00.604) 0:08:54.599 ****** 2026-01-01 01:01:46.601376 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-01-01 01:01:46.601380 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-01-01 01:01:46.601388 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-01-01 01:01:46.601392 | orchestrator | ok: [testbed-node-3] => (item=2) 2026-01-01 01:01:46.601396 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-01 01:01:46.601401 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-01-01 01:01:46.601405 | orchestrator | 2026-01-01 01:01:46.601409 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-01 01:01:46.601413 | orchestrator | Thursday 01 January 2026 00:58:23 +0000 (0:00:01.101) 0:08:55.701 ****** 2026-01-01 01:01:46.601417 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-01-01 01:01:46.601421 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-01-01 01:01:46.601428 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-01-01 01:01:46.601453 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-01-01 01:01:46.601461 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-01-01 01:01:46.601468 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-01 01:01:46.601475 | orchestrator | 2026-01-01 01:01:46.601479 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-01-01 01:01:46.601484 | orchestrator | Thursday 01 January 2026 00:58:25 +0000 (0:00:02.305) 0:08:58.006 ****** 2026-01-01 01:01:46.601488 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-01-01 01:01:46.601492 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-01-01 01:01:46.601496 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-01-01 01:01:46.601500 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-01-01 01:01:46.601504 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-01-01 01:01:46.601508 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-01 01:01:46.601512 | orchestrator | 2026-01-01 01:01:46.601516 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-01 01:01:46.601520 | orchestrator | Thursday 01 January 2026 00:58:29 +0000 (0:00:03.830) 0:09:01.837 ****** 2026-01-01 01:01:46.601525 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.601529 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.601533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-01 01:01:46.601537 | orchestrator | 2026-01-01 01:01:46.601541 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-01 01:01:46.601545 | orchestrator | Thursday 01 January 2026 00:58:32 +0000 (0:00:03.422) 0:09:05.259 ****** 2026-01-01 01:01:46.601549 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.601553 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.601557 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-01-01 01:01:46.601562 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-01 01:01:46.601566 | orchestrator | 2026-01-01 01:01:46.601570 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-01 01:01:46.601577 | orchestrator | Thursday 01 January 2026 00:58:45 +0000 (0:00:12.685) 0:09:17.944 ****** 2026-01-01 01:01:46.601581 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.601585 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.601589 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.601593 | orchestrator | 2026-01-01 01:01:46.601597 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-01 01:01:46.601602 | orchestrator | Thursday 01 January 2026 00:58:46 +0000 (0:00:01.212) 0:09:19.157 ****** 2026-01-01 01:01:46.601606 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.601610 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.601614 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.601618 | orchestrator | 2026-01-01 01:01:46.601622 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-01 01:01:46.601627 | orchestrator | Thursday 01 January 2026 00:58:46 +0000 (0:00:00.385) 0:09:19.543 ****** 2026-01-01 01:01:46.601631 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-01-01 01:01:46.601639 | orchestrator | 2026-01-01 01:01:46.601643 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-01 01:01:46.601647 | orchestrator | Thursday 01 January 2026 00:58:47 +0000 (0:00:00.696) 0:09:20.240 ****** 2026-01-01 01:01:46.601651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:01:46.601656 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-01-01 01:01:46.601660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 01:01:46.601664 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601668 | orchestrator |
2026-01-01 01:01:46.601672 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-01 01:01:46.601676 | orchestrator | Thursday 01 January 2026 00:58:48 +0000 (0:00:00.814) 0:09:21.054 ******
2026-01-01 01:01:46.601680 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601684 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.601688 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.601693 | orchestrator |
2026-01-01 01:01:46.601697 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-01 01:01:46.601701 | orchestrator | Thursday 01 January 2026 00:58:49 +0000 (0:00:00.636) 0:09:21.691 ******
2026-01-01 01:01:46.601705 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601709 | orchestrator |
2026-01-01 01:01:46.601713 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-01 01:01:46.601717 | orchestrator | Thursday 01 January 2026 00:58:49 +0000 (0:00:00.258) 0:09:21.949 ******
2026-01-01 01:01:46.601721 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601725 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.601729 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.601733 | orchestrator |
2026-01-01 01:01:46.601738 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-01 01:01:46.601742 | orchestrator | Thursday 01 January 2026 00:58:49 +0000 (0:00:00.331) 0:09:22.280 ******
2026-01-01 01:01:46.601746 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601750 | orchestrator |
2026-01-01 01:01:46.601754 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-01 01:01:46.601758 | orchestrator | Thursday 01 January 2026 00:58:49 +0000 (0:00:00.265) 0:09:22.546 ******
2026-01-01 01:01:46.601762 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601766 | orchestrator |
2026-01-01 01:01:46.601770 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-01 01:01:46.601775 | orchestrator | Thursday 01 January 2026 00:58:50 +0000 (0:00:00.268) 0:09:22.814 ******
2026-01-01 01:01:46.601779 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601783 | orchestrator |
2026-01-01 01:01:46.601787 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-01 01:01:46.601791 | orchestrator | Thursday 01 January 2026 00:58:50 +0000 (0:00:00.158) 0:09:22.972 ******
2026-01-01 01:01:46.601799 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601803 | orchestrator |
2026-01-01 01:01:46.601807 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-01 01:01:46.601812 | orchestrator | Thursday 01 January 2026 00:58:50 +0000 (0:00:00.235) 0:09:23.208 ******
2026-01-01 01:01:46.601816 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601820 | orchestrator |
2026-01-01 01:01:46.601824 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-01 01:01:46.601828 | orchestrator | Thursday 01 January 2026 00:58:50 +0000 (0:00:00.225) 0:09:23.433 ******
2026-01-01 01:01:46.601832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 01:01:46.601836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 01:01:46.601841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-01 01:01:46.601845 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601849 | orchestrator |
2026-01-01 01:01:46.601853 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-01 01:01:46.601861 | orchestrator | Thursday 01 January 2026 00:58:51 +0000 (0:00:01.054) 0:09:24.488 ******
2026-01-01 01:01:46.601865 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601870 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.601874 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.601878 | orchestrator |
2026-01-01 01:01:46.601882 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-01 01:01:46.601886 | orchestrator | Thursday 01 January 2026 00:58:52 +0000 (0:00:00.385) 0:09:24.873 ******
2026-01-01 01:01:46.601890 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601894 | orchestrator |
2026-01-01 01:01:46.601899 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-01 01:01:46.601903 | orchestrator | Thursday 01 January 2026 00:58:52 +0000 (0:00:00.258) 0:09:25.132 ******
2026-01-01 01:01:46.601907 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601911 | orchestrator |
2026-01-01 01:01:46.601915 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-01-01 01:01:46.601919 | orchestrator |
2026-01-01 01:01:46.601926 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-01 01:01:46.601930 | orchestrator | Thursday 01 January 2026 00:58:53 +0000 (0:00:00.737) 0:09:25.869 ******
2026-01-01 01:01:46.601935 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.601940 | orchestrator |
2026-01-01 01:01:46.601944 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-01 01:01:46.601949 | orchestrator | Thursday 01 January 2026 00:58:54 +0000 (0:00:01.333) 0:09:27.202 ******
2026-01-01 01:01:46.601953 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.601957 | orchestrator |
2026-01-01 01:01:46.601961 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-01 01:01:46.601965 | orchestrator | Thursday 01 January 2026 00:58:55 +0000 (0:00:01.305) 0:09:28.507 ******
2026-01-01 01:01:46.601970 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.601974 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.601978 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.601982 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.601986 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.601990 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.601994 | orchestrator |
2026-01-01 01:01:46.601999 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-01 01:01:46.602003 | orchestrator | Thursday 01 January 2026 00:58:57 +0000 (0:00:01.316) 0:09:29.824 ******
2026-01-01 01:01:46.602007 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602031 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602037 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602041 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602045 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602050 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602054 | orchestrator |
2026-01-01 01:01:46.602058 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-01 01:01:46.602062 | orchestrator | Thursday 01 January 2026 00:58:57 +0000 (0:00:00.686) 0:09:30.510 ******
2026-01-01 01:01:46.602067 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602071 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602075 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602079 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602083 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602088 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602092 | orchestrator |
2026-01-01 01:01:46.602096 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-01 01:01:46.602104 | orchestrator | Thursday 01 January 2026 00:58:58 +0000 (0:00:00.928) 0:09:31.439 ******
2026-01-01 01:01:46.602108 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602112 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602116 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602120 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602125 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602129 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602133 | orchestrator |
2026-01-01 01:01:46.602137 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-01 01:01:46.602141 | orchestrator | Thursday 01 January 2026 00:58:59 +0000 (0:00:00.698) 0:09:32.137 ******
2026-01-01 01:01:46.602145 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.602150 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.602154 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.602158 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.602162 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.602166 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.602170 | orchestrator |
2026-01-01 01:01:46.602174 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-01 01:01:46.602182 | orchestrator | Thursday 01 January 2026 00:59:00 +0000 (0:00:01.493) 0:09:33.630 ******
2026-01-01 01:01:46.602187 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.602191 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.602195 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.602199 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602203 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602207 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602211 | orchestrator |
2026-01-01 01:01:46.602216 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-01 01:01:46.602220 | orchestrator | Thursday 01 January 2026 00:59:01 +0000 (0:00:00.619) 0:09:34.250 ******
2026-01-01 01:01:46.602224 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.602228 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.602232 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.602236 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602240 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602244 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602249 | orchestrator |
2026-01-01 01:01:46.602253 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-01 01:01:46.602257 | orchestrator | Thursday 01 January 2026 00:59:02 +0000 (0:00:00.840) 0:09:35.090 ******
2026-01-01 01:01:46.602261 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602265 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602269 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602273 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.602277 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.602282 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.602286 | orchestrator |
2026-01-01 01:01:46.602290 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-01 01:01:46.602294 | orchestrator | Thursday 01 January 2026 00:59:03 +0000 (0:00:01.066) 0:09:36.157 ******
2026-01-01 01:01:46.602298 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602302 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602306 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602310 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.602314 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.602318 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.602323 | orchestrator |
2026-01-01 01:01:46.602327 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-01 01:01:46.602333 | orchestrator | Thursday 01 January 2026 00:59:04 +0000 (0:00:01.461) 0:09:37.619 ******
2026-01-01 01:01:46.602338 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.602342 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.602346 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.602358 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602362 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602366 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602370 | orchestrator |
2026-01-01 01:01:46.602375 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-01 01:01:46.602379 | orchestrator | Thursday 01 January 2026 00:59:05 +0000 (0:00:00.684) 0:09:38.303 ******
2026-01-01 01:01:46.602383 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.602387 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.602391 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.602395 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.602400 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.602404 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.602408 | orchestrator |
2026-01-01 01:01:46.602412 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-01 01:01:46.602416 | orchestrator | Thursday 01 January 2026 00:59:06 +0000 (0:00:00.914) 0:09:39.218 ******
2026-01-01 01:01:46.602420 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602425 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602429 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602448 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602453 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602457 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602461 | orchestrator |
2026-01-01 01:01:46.602465 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-01 01:01:46.602469 | orchestrator | Thursday 01 January 2026 00:59:07 +0000 (0:00:00.633) 0:09:39.851 ******
2026-01-01 01:01:46.602473 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602478 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602482 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602486 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602490 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602494 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602498 | orchestrator |
2026-01-01 01:01:46.602502 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-01 01:01:46.602506 | orchestrator | Thursday 01 January 2026 00:59:08 +0000 (0:00:00.921) 0:09:40.773 ******
2026-01-01 01:01:46.602511 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602515 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602519 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602523 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602527 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602531 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602535 | orchestrator |
2026-01-01 01:01:46.602539 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-01 01:01:46.602543 | orchestrator | Thursday 01 January 2026 00:59:08 +0000 (0:00:00.614) 0:09:41.388 ******
2026-01-01 01:01:46.602548 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.602552 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.602556 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.602560 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602564 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602568 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602572 | orchestrator |
2026-01-01 01:01:46.602576 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-01 01:01:46.602580 | orchestrator | Thursday 01 January 2026 00:59:09 +0000 (0:00:00.891) 0:09:42.279 ******
2026-01-01 01:01:46.602585 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.602589 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.602593 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.602597 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:01:46.602601 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:01:46.602605 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:01:46.602612 | orchestrator |
2026-01-01 01:01:46.602616 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-01 01:01:46.602624 | orchestrator | Thursday 01 January 2026 00:59:10 +0000 (0:00:00.777) 0:09:43.057 ******
2026-01-01 01:01:46.602628 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.602632 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.602636 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.602641 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.602645 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.602649 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.602653 | orchestrator |
2026-01-01 01:01:46.602657 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-01 01:01:46.602661 | orchestrator | Thursday 01 January 2026 00:59:11 +0000 (0:00:01.156) 0:09:44.213 ******
2026-01-01 01:01:46.602666 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602670 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602674 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602678 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.602682 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.602686 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.602690 | orchestrator |
2026-01-01 01:01:46.602695 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-01 01:01:46.602699 | orchestrator | Thursday 01 January 2026 00:59:12 +0000 (0:00:00.688) 0:09:44.902 ******
2026-01-01 01:01:46.602703 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602707 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602711 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602715 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.602719 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.602724 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.602728 | orchestrator |
2026-01-01 01:01:46.602732 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-01 01:01:46.602736 | orchestrator | Thursday 01 January 2026 00:59:13 +0000 (0:00:01.332) 0:09:46.235 ******
2026-01-01 01:01:46.602740 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-01 01:01:46.602744 | orchestrator |
2026-01-01 01:01:46.602748 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-01 01:01:46.602753 | orchestrator | Thursday 01 January 2026 00:59:18 +0000 (0:00:04.426) 0:09:50.661 ******
2026-01-01 01:01:46.602757 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-01 01:01:46.602761 | orchestrator |
2026-01-01 01:01:46.602770 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-01 01:01:46.602774 | orchestrator | Thursday 01 January 2026 00:59:20 +0000 (0:00:02.256) 0:09:52.918 ******
2026-01-01 01:01:46.602778 | orchestrator | changed: [testbed-node-3]
2026-01-01 01:01:46.602782 | orchestrator | changed: [testbed-node-4]
2026-01-01 01:01:46.602786 | orchestrator | changed: [testbed-node-5]
2026-01-01 01:01:46.602791 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.602795 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.602799 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.602803 | orchestrator |
2026-01-01 01:01:46.602807 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-01 01:01:46.602812 | orchestrator | Thursday 01 January 2026 00:59:22 +0000 (0:00:02.061) 0:09:54.979 ******
2026-01-01 01:01:46.602816 | orchestrator | changed: [testbed-node-3]
2026-01-01 01:01:46.602820 | orchestrator | changed: [testbed-node-5]
2026-01-01 01:01:46.602824 | orchestrator | changed: [testbed-node-4]
2026-01-01 01:01:46.602828 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.602833 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.602837 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.602841 | orchestrator |
2026-01-01 01:01:46.602845 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-01 01:01:46.602849 | orchestrator | Thursday 01 January 2026 00:59:23 +0000 (0:00:01.002) 0:09:55.982 ******
2026-01-01 01:01:46.602857 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.602862 | orchestrator |
2026-01-01 01:01:46.602866 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-01 01:01:46.602871 | orchestrator | Thursday 01 January 2026 00:59:24 +0000 (0:00:01.335) 0:09:57.317 ******
2026-01-01 01:01:46.602875 | orchestrator | changed: [testbed-node-3]
2026-01-01 01:01:46.602879 | orchestrator | changed: [testbed-node-4]
2026-01-01 01:01:46.602883 | orchestrator | changed: [testbed-node-5]
2026-01-01 01:01:46.602887 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.602891 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.602895 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.602900 | orchestrator |
2026-01-01 01:01:46.602904 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-01 01:01:46.602908 | orchestrator | Thursday 01 January 2026 00:59:26 +0000 (0:00:01.881) 0:09:59.199 ******
2026-01-01 01:01:46.602912 | orchestrator | changed: [testbed-node-4]
2026-01-01 01:01:46.602916 | orchestrator | changed: [testbed-node-5]
2026-01-01 01:01:46.602920 | orchestrator | changed: [testbed-node-3]
2026-01-01 01:01:46.602925 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.602929 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.602933 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.602937 | orchestrator |
2026-01-01 01:01:46.602941 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-01 01:01:46.602945 | orchestrator | Thursday 01 January 2026 00:59:30 +0000 (0:00:03.511) 0:10:02.710 ******
2026-01-01 01:01:46.602950 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:01:46.602954 | orchestrator |
2026-01-01 01:01:46.602958 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-01 01:01:46.602962 | orchestrator | Thursday 01 January 2026 00:59:31 +0000 (0:00:01.314) 0:10:04.025 ******
2026-01-01 01:01:46.602966 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.602971 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.602975 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.602979 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.602983 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.602987 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.602991 | orchestrator |
2026-01-01 01:01:46.602995 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-01 01:01:46.603002 | orchestrator | Thursday 01 January 2026 00:59:32 +0000 (0:00:00.905) 0:10:04.930 ******
2026-01-01 01:01:46.603007 | orchestrator | changed: [testbed-node-4]
2026-01-01 01:01:46.603011 | orchestrator | changed: [testbed-node-3]
2026-01-01 01:01:46.603015 | orchestrator | changed: [testbed-node-5]
2026-01-01 01:01:46.603019 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:01:46.603023 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:01:46.603028 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:01:46.603032 | orchestrator |
2026-01-01 01:01:46.603036 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-01 01:01:46.603040 | orchestrator | Thursday 01 January 2026 00:59:34 +0000 (0:00:02.441) 0:10:07.372 ******
2026-01-01 01:01:46.603044 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603050 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603057 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603063 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:01:46.603070 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:01:46.603076 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:01:46.603082 | orchestrator |
2026-01-01 01:01:46.603089 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-01 01:01:46.603096 | orchestrator |
2026-01-01 01:01:46.603102 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-01 01:01:46.603109 | orchestrator | Thursday 01 January 2026 00:59:35 +0000 (0:00:01.223) 0:10:08.595 ******
2026-01-01 01:01:46.603120 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 01:01:46.603127 | orchestrator |
2026-01-01 01:01:46.603133 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-01 01:01:46.603139 | orchestrator | Thursday 01 January 2026 00:59:36 +0000 (0:00:00.526) 0:10:09.121 ******
2026-01-01 01:01:46.603146 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 01:01:46.603152 | orchestrator |
2026-01-01 01:01:46.603158 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-01 01:01:46.603169 | orchestrator | Thursday 01 January 2026 00:59:37 +0000 (0:00:00.795) 0:10:09.916 ******
2026-01-01 01:01:46.603176 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.603182 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.603188 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.603195 | orchestrator |
2026-01-01 01:01:46.603202 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-01 01:01:46.603209 | orchestrator | Thursday 01 January 2026 00:59:37 +0000 (0:00:00.326) 0:10:10.243 ******
2026-01-01 01:01:46.603215 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603223 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603228 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603232 | orchestrator |
2026-01-01 01:01:46.603236 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-01 01:01:46.603240 | orchestrator | Thursday 01 January 2026 00:59:38 +0000 (0:00:00.721) 0:10:10.964 ******
2026-01-01 01:01:46.603244 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603249 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603253 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603257 | orchestrator |
2026-01-01 01:01:46.603261 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-01 01:01:46.603265 | orchestrator | Thursday 01 January 2026 00:59:39 +0000 (0:00:01.097) 0:10:12.062 ******
2026-01-01 01:01:46.603269 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603273 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603277 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603281 | orchestrator |
2026-01-01 01:01:46.603286 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-01 01:01:46.603290 | orchestrator | Thursday 01 January 2026 00:59:40 +0000 (0:00:00.741) 0:10:12.804 ******
2026-01-01 01:01:46.603294 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.603298 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.603302 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.603306 | orchestrator |
2026-01-01 01:01:46.603310 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-01 01:01:46.603315 | orchestrator | Thursday 01 January 2026 00:59:40 +0000 (0:00:00.347) 0:10:13.152 ******
2026-01-01 01:01:46.603319 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.603323 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.603327 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.603331 | orchestrator |
2026-01-01 01:01:46.603336 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-01 01:01:46.603340 | orchestrator | Thursday 01 January 2026 00:59:40 +0000 (0:00:00.329) 0:10:13.481 ******
2026-01-01 01:01:46.603344 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.603348 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.603352 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.603356 | orchestrator |
2026-01-01 01:01:46.603360 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-01 01:01:46.603364 | orchestrator | Thursday 01 January 2026 00:59:41 +0000 (0:00:00.621) 0:10:14.102 ******
2026-01-01 01:01:46.603368 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603377 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603381 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603385 | orchestrator |
2026-01-01 01:01:46.603389 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-01 01:01:46.603393 | orchestrator | Thursday 01 January 2026 00:59:42 +0000 (0:00:00.773) 0:10:14.876 ******
2026-01-01 01:01:46.603397 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603402 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603406 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603410 | orchestrator |
2026-01-01 01:01:46.603414 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-01 01:01:46.603418 | orchestrator | Thursday 01 January 2026 00:59:42 +0000 (0:00:00.747) 0:10:15.623 ******
2026-01-01 01:01:46.603422 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.603427 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.603431 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.603471 | orchestrator |
2026-01-01 01:01:46.603476 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-01 01:01:46.603485 | orchestrator | Thursday 01 January 2026 00:59:43 +0000 (0:00:00.352) 0:10:15.975 ******
2026-01-01 01:01:46.603489 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.603494 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.603498 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.603502 | orchestrator |
2026-01-01 01:01:46.603506 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-01 01:01:46.603510 | orchestrator | Thursday 01 January 2026 00:59:43 +0000 (0:00:00.595) 0:10:16.571 ******
2026-01-01 01:01:46.603514 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603519 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603523 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603527 | orchestrator |
2026-01-01 01:01:46.603531 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-01 01:01:46.603535 | orchestrator | Thursday 01 January 2026 00:59:44 +0000 (0:00:00.345) 0:10:16.917 ******
2026-01-01 01:01:46.603539 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603544 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603548 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603552 | orchestrator |
2026-01-01 01:01:46.603556 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-01 01:01:46.603560 | orchestrator | Thursday 01 January 2026 00:59:44 +0000 (0:00:00.346) 0:10:17.263 ******
2026-01-01 01:01:46.603564 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603568 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603573 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603577 | orchestrator |
2026-01-01 01:01:46.603581 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-01 01:01:46.603585 | orchestrator | Thursday 01 January 2026 00:59:44 +0000 (0:00:00.342) 0:10:17.605 ******
2026-01-01 01:01:46.603589 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.603593 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.603598 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.603602 | orchestrator |
2026-01-01 01:01:46.603606 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-01 01:01:46.603610 | orchestrator | Thursday 01 January 2026 00:59:45 +0000 (0:00:00.623) 0:10:18.228 ******
2026-01-01 01:01:46.603614 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.603622 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.603626 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.603631 | orchestrator |
2026-01-01 01:01:46.603635 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-01 01:01:46.603639 | orchestrator | Thursday 01 January 2026 00:59:45 +0000 (0:00:00.329) 0:10:18.557 ******
2026-01-01 01:01:46.603643 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.603647 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.603652 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.603697 | orchestrator |
2026-01-01 01:01:46.603705 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-01 01:01:46.603710 | orchestrator | Thursday 01 January 2026 00:59:46 +0000 (0:00:00.333) 0:10:18.891 ******
2026-01-01 01:01:46.603714 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603718 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603722 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603726 | orchestrator |
2026-01-01 01:01:46.603731 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-01 01:01:46.603735 | orchestrator | Thursday 01 January 2026 00:59:46 +0000 (0:00:00.354) 0:10:19.245 ******
2026-01-01 01:01:46.603739 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:01:46.603743 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:01:46.603747 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:01:46.603751 | orchestrator |
2026-01-01 01:01:46.603755 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-01 01:01:46.603760 | orchestrator | Thursday 01 January 2026 00:59:47 +0000 (0:00:00.830) 0:10:20.076 ******
2026-01-01 01:01:46.603764 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:01:46.603768 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:01:46.603772 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-01-01 01:01:46.603776 | orchestrator |
2026-01-01 01:01:46.603781 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-01-01 01:01:46.603785 | orchestrator | Thursday 01 January 2026 00:59:47 +0000 (0:00:00.401) 0:10:20.477 ******
2026-01-01 01:01:46.603789 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-01 01:01:46.603793 | orchestrator |
2026-01-01 01:01:46.603797 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-01-01 01:01:46.603801 | orchestrator | Thursday 01 January 2026 00:59:50 +0000 (0:00:02.429) 0:10:22.907 ******
2026-01-01 01:01:46.603807 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-01-01 01:01:46.603813 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:01:46.603817 | orchestrator |
2026-01-01 01:01:46.603821 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-01-01 01:01:46.603826 | orchestrator | Thursday 01 January 2026 00:59:50 +0000 (0:00:00.286) 0:10:23.193 ******
2026-01-01 01:01:46.603831 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-01 01:01:46.603837 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-01 01:01:46.603842 | orchestrator |
2026-01-01 01:01:46.603849 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-01-01 01:01:46.603853 | orchestrator | Thursday 01 January 2026 00:59:59 +0000 (0:00:09.239) 0:10:32.433 ******
2026-01-01 01:01:46.603858 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-01 01:01:46.603862 | orchestrator |
2026-01-01 01:01:46.603866 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-01 01:01:46.603870 | orchestrator | Thursday 01 January 2026 01:00:04 +0000 (0:00:04.688) 0:10:37.122 ******
2026-01-01 01:01:46.603874 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-01-01 01:01:46.603878 | orchestrator |
2026-01-01 01:01:46.603882 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-01 01:01:46.603886 | orchestrator | Thursday 01 January 2026 01:00:05 +0000 (0:00:00.700) 0:10:37.822 ******
2026-01-01 01:01:46.603894 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-01 01:01:46.603898 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-01 01:01:46.603902 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-01 01:01:46.603906 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-01-01 01:01:46.603911 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-01 01:01:46.603915 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-01 01:01:46.603919 | orchestrator |
2026-01-01 01:01:46.603923 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-01 01:01:46.603927 | orchestrator | Thursday 01 January 2026 01:00:06 +0000 (0:00:01.178) 0:10:39.001 ******
2026-01-01 01:01:46.603931 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-01 01:01:46.603935 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-01 01:01:46.603940 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-01 01:01:46.603944 | orchestrator |
2026-01-01 01:01:46.603951 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-01 01:01:46.603955 | orchestrator | Thursday 01 January 2026 01:00:08 +0000 (0:00:02.339) 0:10:41.340 ******
2026-01-01 01:01:46.603959 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-01 01:01:46.603963 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-01-01 01:01:46.603967 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.603971 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-01 01:01:46.603975 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-01 01:01:46.603980 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.603984 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-01 01:01:46.603988 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-01 01:01:46.603992 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.603996 | orchestrator | 2026-01-01 01:01:46.604001 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-01 01:01:46.604005 | orchestrator | Thursday 01 January 2026 01:00:10 +0000 (0:00:01.737) 0:10:43.078 ****** 2026-01-01 01:01:46.604009 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.604013 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.604017 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.604021 | orchestrator | 2026-01-01 01:01:46.604025 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-01 01:01:46.604030 | orchestrator | Thursday 01 January 2026 01:00:13 +0000 (0:00:02.980) 0:10:46.059 ****** 2026-01-01 01:01:46.604034 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604038 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604042 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.604046 | orchestrator | 2026-01-01 01:01:46.604050 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-01 01:01:46.604053 | orchestrator | Thursday 01 January 2026 01:00:13 +0000 (0:00:00.330) 0:10:46.389 ****** 2026-01-01 01:01:46.604057 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-01 01:01:46.604061 | orchestrator | 2026-01-01 01:01:46.604065 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-01 01:01:46.604069 | orchestrator | Thursday 01 January 2026 01:00:14 +0000 (0:00:00.827) 0:10:47.217 ****** 2026-01-01 01:01:46.604072 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.604076 | orchestrator | 2026-01-01 01:01:46.604080 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-01 01:01:46.604084 | orchestrator | Thursday 01 January 2026 01:00:15 +0000 (0:00:00.566) 0:10:47.783 ****** 2026-01-01 01:01:46.604091 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.604095 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.604099 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.604102 | orchestrator | 2026-01-01 01:01:46.604106 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-01 01:01:46.604110 | orchestrator | Thursday 01 January 2026 01:00:16 +0000 (0:00:01.449) 0:10:49.232 ****** 2026-01-01 01:01:46.604114 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.604117 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.604121 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.604125 | orchestrator | 2026-01-01 01:01:46.604129 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-01 01:01:46.604133 | orchestrator | Thursday 01 January 2026 01:00:18 +0000 (0:00:01.650) 0:10:50.883 ****** 2026-01-01 01:01:46.604136 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.604140 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.604144 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.604148 | orchestrator | 2026-01-01 
01:01:46.604152 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-01 01:01:46.604158 | orchestrator | Thursday 01 January 2026 01:00:20 +0000 (0:00:01.994) 0:10:52.878 ****** 2026-01-01 01:01:46.604162 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.604166 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.604170 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.604174 | orchestrator | 2026-01-01 01:01:46.604178 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-01 01:01:46.604181 | orchestrator | Thursday 01 January 2026 01:00:22 +0000 (0:00:02.070) 0:10:54.949 ****** 2026-01-01 01:01:46.604185 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604189 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604193 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604197 | orchestrator | 2026-01-01 01:01:46.604200 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-01 01:01:46.604204 | orchestrator | Thursday 01 January 2026 01:00:24 +0000 (0:00:02.037) 0:10:56.986 ****** 2026-01-01 01:01:46.604208 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.604212 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.604215 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.604219 | orchestrator | 2026-01-01 01:01:46.604223 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-01 01:01:46.604227 | orchestrator | Thursday 01 January 2026 01:00:25 +0000 (0:00:01.070) 0:10:58.056 ****** 2026-01-01 01:01:46.604231 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.604235 | orchestrator | 2026-01-01 01:01:46.604238 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-01 01:01:46.604242 | orchestrator | Thursday 01 January 2026 01:00:26 +0000 (0:00:01.056) 0:10:59.113 ****** 2026-01-01 01:01:46.604246 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604250 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604254 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604258 | orchestrator | 2026-01-01 01:01:46.604261 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-01 01:01:46.604265 | orchestrator | Thursday 01 January 2026 01:00:26 +0000 (0:00:00.415) 0:10:59.529 ****** 2026-01-01 01:01:46.604269 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.604275 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.604279 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.604283 | orchestrator | 2026-01-01 01:01:46.604287 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-01 01:01:46.604290 | orchestrator | Thursday 01 January 2026 01:00:28 +0000 (0:00:01.344) 0:11:00.873 ****** 2026-01-01 01:01:46.604294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:01:46.604301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:01:46.604305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:01:46.604309 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604313 | orchestrator | 2026-01-01 01:01:46.604316 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-01 01:01:46.604320 | orchestrator | Thursday 01 January 2026 01:00:29 +0000 (0:00:01.442) 0:11:02.315 ****** 2026-01-01 01:01:46.604324 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604328 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604332 | orchestrator | ok: [testbed-node-5] 2026-01-01 
01:01:46.604335 | orchestrator | 2026-01-01 01:01:46.604339 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-01 01:01:46.604343 | orchestrator | 2026-01-01 01:01:46.604347 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-01 01:01:46.604351 | orchestrator | Thursday 01 January 2026 01:00:30 +0000 (0:00:01.330) 0:11:03.646 ****** 2026-01-01 01:01:46.604354 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.604358 | orchestrator | 2026-01-01 01:01:46.604362 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-01 01:01:46.604366 | orchestrator | Thursday 01 January 2026 01:00:31 +0000 (0:00:00.826) 0:11:04.472 ****** 2026-01-01 01:01:46.604370 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.604374 | orchestrator | 2026-01-01 01:01:46.604377 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-01 01:01:46.604381 | orchestrator | Thursday 01 January 2026 01:00:32 +0000 (0:00:00.973) 0:11:05.445 ****** 2026-01-01 01:01:46.604385 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604389 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604393 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.604396 | orchestrator | 2026-01-01 01:01:46.604400 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-01 01:01:46.604404 | orchestrator | Thursday 01 January 2026 01:00:33 +0000 (0:00:00.401) 0:11:05.847 ****** 2026-01-01 01:01:46.604408 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604412 | orchestrator | ok: [testbed-node-4] 2026-01-01 
01:01:46.604415 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604419 | orchestrator | 2026-01-01 01:01:46.604423 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-01 01:01:46.604427 | orchestrator | Thursday 01 January 2026 01:00:33 +0000 (0:00:00.735) 0:11:06.582 ****** 2026-01-01 01:01:46.604431 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604447 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604451 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604455 | orchestrator | 2026-01-01 01:01:46.604459 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-01 01:01:46.604463 | orchestrator | Thursday 01 January 2026 01:00:34 +0000 (0:00:00.725) 0:11:07.308 ****** 2026-01-01 01:01:46.604466 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604470 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604474 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604478 | orchestrator | 2026-01-01 01:01:46.604481 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-01 01:01:46.604485 | orchestrator | Thursday 01 January 2026 01:00:35 +0000 (0:00:01.075) 0:11:08.383 ****** 2026-01-01 01:01:46.604489 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604495 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604499 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.604503 | orchestrator | 2026-01-01 01:01:46.604507 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-01 01:01:46.604511 | orchestrator | Thursday 01 January 2026 01:00:36 +0000 (0:00:00.302) 0:11:08.686 ****** 2026-01-01 01:01:46.604518 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604522 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604525 | orchestrator | skipping: 
[testbed-node-5] 2026-01-01 01:01:46.604529 | orchestrator | 2026-01-01 01:01:46.604533 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-01 01:01:46.604537 | orchestrator | Thursday 01 January 2026 01:00:36 +0000 (0:00:00.341) 0:11:09.027 ****** 2026-01-01 01:01:46.604540 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604544 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604548 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.604552 | orchestrator | 2026-01-01 01:01:46.604555 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-01 01:01:46.604559 | orchestrator | Thursday 01 January 2026 01:00:36 +0000 (0:00:00.333) 0:11:09.361 ****** 2026-01-01 01:01:46.604563 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604567 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604571 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604574 | orchestrator | 2026-01-01 01:01:46.604578 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-01 01:01:46.604582 | orchestrator | Thursday 01 January 2026 01:00:37 +0000 (0:00:01.131) 0:11:10.493 ****** 2026-01-01 01:01:46.604586 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604590 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604593 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604597 | orchestrator | 2026-01-01 01:01:46.604601 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-01 01:01:46.604605 | orchestrator | Thursday 01 January 2026 01:00:38 +0000 (0:00:00.835) 0:11:11.329 ****** 2026-01-01 01:01:46.604608 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604612 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604616 | orchestrator | skipping: [testbed-node-5] 2026-01-01 
01:01:46.604620 | orchestrator | 2026-01-01 01:01:46.604626 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-01 01:01:46.604630 | orchestrator | Thursday 01 January 2026 01:00:39 +0000 (0:00:00.370) 0:11:11.699 ****** 2026-01-01 01:01:46.604634 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604637 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604641 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.604645 | orchestrator | 2026-01-01 01:01:46.604649 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-01 01:01:46.604653 | orchestrator | Thursday 01 January 2026 01:00:39 +0000 (0:00:00.315) 0:11:12.014 ****** 2026-01-01 01:01:46.604656 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604660 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604664 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604668 | orchestrator | 2026-01-01 01:01:46.604671 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-01 01:01:46.604675 | orchestrator | Thursday 01 January 2026 01:00:39 +0000 (0:00:00.615) 0:11:12.630 ****** 2026-01-01 01:01:46.604679 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604683 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604687 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604690 | orchestrator | 2026-01-01 01:01:46.604694 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-01 01:01:46.604698 | orchestrator | Thursday 01 January 2026 01:00:40 +0000 (0:00:00.468) 0:11:13.098 ****** 2026-01-01 01:01:46.604702 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604705 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604709 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604713 | orchestrator | 2026-01-01 
01:01:46.604717 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-01 01:01:46.604721 | orchestrator | Thursday 01 January 2026 01:00:40 +0000 (0:00:00.358) 0:11:13.456 ****** 2026-01-01 01:01:46.604724 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604728 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604735 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.604739 | orchestrator | 2026-01-01 01:01:46.604743 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-01 01:01:46.604747 | orchestrator | Thursday 01 January 2026 01:00:41 +0000 (0:00:00.333) 0:11:13.790 ****** 2026-01-01 01:01:46.604751 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604754 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604758 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.604762 | orchestrator | 2026-01-01 01:01:46.604766 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-01 01:01:46.604770 | orchestrator | Thursday 01 January 2026 01:00:41 +0000 (0:00:00.667) 0:11:14.458 ****** 2026-01-01 01:01:46.604773 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604777 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604781 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.604785 | orchestrator | 2026-01-01 01:01:46.604788 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-01 01:01:46.604792 | orchestrator | Thursday 01 January 2026 01:00:42 +0000 (0:00:00.351) 0:11:14.809 ****** 2026-01-01 01:01:46.604796 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604800 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604804 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604807 | orchestrator | 2026-01-01 01:01:46.604811 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-01 01:01:46.604815 | orchestrator | Thursday 01 January 2026 01:00:42 +0000 (0:00:00.347) 0:11:15.157 ****** 2026-01-01 01:01:46.604819 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.604822 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.604826 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.604830 | orchestrator | 2026-01-01 01:01:46.604834 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-01 01:01:46.604838 | orchestrator | Thursday 01 January 2026 01:00:43 +0000 (0:00:00.831) 0:11:15.988 ****** 2026-01-01 01:01:46.604844 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.604848 | orchestrator | 2026-01-01 01:01:46.604852 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-01 01:01:46.604856 | orchestrator | Thursday 01 January 2026 01:00:43 +0000 (0:00:00.547) 0:11:16.536 ****** 2026-01-01 01:01:46.604859 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:01:46.604863 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-01 01:01:46.604867 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 01:01:46.604871 | orchestrator | 2026-01-01 01:01:46.604875 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-01 01:01:46.604879 | orchestrator | Thursday 01 January 2026 01:00:46 +0000 (0:00:02.240) 0:11:18.776 ****** 2026-01-01 01:01:46.604882 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-01 01:01:46.604886 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-01 01:01:46.604890 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.604894 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-01 01:01:46.604898 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-01 01:01:46.604901 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.604905 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-01 01:01:46.604909 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-01 01:01:46.604913 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.604917 | orchestrator | 2026-01-01 01:01:46.604920 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-01 01:01:46.604924 | orchestrator | Thursday 01 January 2026 01:00:47 +0000 (0:00:01.202) 0:11:19.979 ****** 2026-01-01 01:01:46.604928 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.604932 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.604940 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.604944 | orchestrator | 2026-01-01 01:01:46.604948 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-01 01:01:46.604952 | orchestrator | Thursday 01 January 2026 01:00:48 +0000 (0:00:00.716) 0:11:20.695 ****** 2026-01-01 01:01:46.604956 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.604959 | orchestrator | 2026-01-01 01:01:46.604963 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-01 01:01:46.604967 | orchestrator | Thursday 01 January 2026 01:00:48 +0000 (0:00:00.630) 0:11:21.326 ****** 2026-01-01 01:01:46.604971 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.605027 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.605042 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.605046 | orchestrator | 2026-01-01 01:01:46.605050 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-01 01:01:46.605054 | orchestrator | Thursday 01 January 2026 01:00:49 +0000 (0:00:00.811) 0:11:22.138 ****** 2026-01-01 01:01:46.605058 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:01:46.605062 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-01 01:01:46.605066 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:01:46.605069 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:01:46.605073 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-01 01:01:46.605077 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-01 01:01:46.605081 | orchestrator | 2026-01-01 01:01:46.605085 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-01 01:01:46.605089 | orchestrator | Thursday 01 January 2026 01:00:54 +0000 (0:00:05.001) 0:11:27.139 ****** 2026-01-01 01:01:46.605093 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:01:46.605097 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:01:46.605100 | 
orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 01:01:46.605104 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 01:01:46.605108 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:01:46.605112 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 01:01:46.605116 | orchestrator | 2026-01-01 01:01:46.605120 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-01 01:01:46.605123 | orchestrator | Thursday 01 January 2026 01:00:57 +0000 (0:00:02.510) 0:11:29.650 ****** 2026-01-01 01:01:46.605127 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-01 01:01:46.605131 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.605135 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-01 01:01:46.605139 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.605143 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-01 01:01:46.605146 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.605150 | orchestrator | 2026-01-01 01:01:46.605158 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-01 01:01:46.605169 | orchestrator | Thursday 01 January 2026 01:00:58 +0000 (0:00:01.273) 0:11:30.924 ****** 2026-01-01 01:01:46.605173 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-01 01:01:46.605176 | orchestrator | 2026-01-01 01:01:46.605180 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-01 01:01:46.605184 | orchestrator | Thursday 01 January 2026 01:00:58 +0000 (0:00:00.236) 0:11:31.160 ****** 2026-01-01 01:01:46.605188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-01-01 01:01:46.605192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 01:01:46.605196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 01:01:46.605199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 01:01:46.605203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 01:01:46.605207 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.605211 | orchestrator | 2026-01-01 01:01:46.605215 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-01 01:01:46.605219 | orchestrator | Thursday 01 January 2026 01:00:59 +0000 (0:00:00.939) 0:11:32.099 ****** 2026-01-01 01:01:46.605222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 01:01:46.605229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 01:01:46.605233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 01:01:46.605236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 01:01:46.605240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 01:01:46.605244 | orchestrator | skipping: [testbed-node-3] 2026-01-01 
01:01:46.605248 | orchestrator | 2026-01-01 01:01:46.605252 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-01 01:01:46.605256 | orchestrator | Thursday 01 January 2026 01:01:00 +0000 (0:00:01.215) 0:11:33.315 ****** 2026-01-01 01:01:46.605259 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-01 01:01:46.605263 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-01 01:01:46.605267 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-01 01:01:46.605271 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-01 01:01:46.605275 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-01 01:01:46.605279 | orchestrator | 2026-01-01 01:01:46.605282 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-01 01:01:46.605286 | orchestrator | Thursday 01 January 2026 01:01:31 +0000 (0:00:31.300) 0:12:04.615 ****** 2026-01-01 01:01:46.605290 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.605298 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.605301 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.605305 | orchestrator | 2026-01-01 01:01:46.605309 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-01 01:01:46.605313 | orchestrator | 
Thursday 01 January 2026 01:01:32 +0000 (0:00:00.354) 0:12:04.970 ****** 2026-01-01 01:01:46.605317 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.605321 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.605324 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.605328 | orchestrator | 2026-01-01 01:01:46.605332 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-01 01:01:46.605336 | orchestrator | Thursday 01 January 2026 01:01:32 +0000 (0:00:00.327) 0:12:05.298 ****** 2026-01-01 01:01:46.605340 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.605343 | orchestrator | 2026-01-01 01:01:46.605347 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-01 01:01:46.605351 | orchestrator | Thursday 01 January 2026 01:01:33 +0000 (0:00:00.854) 0:12:06.152 ****** 2026-01-01 01:01:46.605357 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.605361 | orchestrator | 2026-01-01 01:01:46.605365 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-01 01:01:46.605369 | orchestrator | Thursday 01 January 2026 01:01:34 +0000 (0:00:00.576) 0:12:06.729 ****** 2026-01-01 01:01:46.605373 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.605377 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.605380 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.605384 | orchestrator | 2026-01-01 01:01:46.605388 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-01 01:01:46.605392 | orchestrator | Thursday 01 January 2026 01:01:35 +0000 (0:00:01.350) 0:12:08.079 ****** 2026-01-01 01:01:46.605396 | orchestrator | changed: 
[testbed-node-3] 2026-01-01 01:01:46.605399 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.605403 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.605407 | orchestrator | 2026-01-01 01:01:46.605411 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-01 01:01:46.605415 | orchestrator | Thursday 01 January 2026 01:01:36 +0000 (0:00:01.497) 0:12:09.577 ****** 2026-01-01 01:01:46.605418 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:01:46.605422 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:01:46.605426 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:01:46.605430 | orchestrator | 2026-01-01 01:01:46.605448 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-01 01:01:46.605455 | orchestrator | Thursday 01 January 2026 01:01:38 +0000 (0:00:01.883) 0:12:11.461 ****** 2026-01-01 01:01:46.605462 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.605468 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.605474 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-01 01:01:46.605481 | orchestrator | 2026-01-01 01:01:46.605490 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-01 01:01:46.605496 | orchestrator | Thursday 01 January 2026 01:01:41 +0000 (0:00:02.963) 0:12:14.424 ****** 2026-01-01 01:01:46.605502 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.605508 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.605514 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.605520 | orchestrator 
| 2026-01-01 01:01:46.605526 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-01 01:01:46.605536 | orchestrator | Thursday 01 January 2026 01:01:42 +0000 (0:00:00.375) 0:12:14.800 ****** 2026-01-01 01:01:46.605542 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:46.605548 | orchestrator | 2026-01-01 01:01:46.605553 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-01 01:01:46.605559 | orchestrator | Thursday 01 January 2026 01:01:42 +0000 (0:00:00.627) 0:12:15.427 ****** 2026-01-01 01:01:46.605565 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.605571 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.605577 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.605583 | orchestrator | 2026-01-01 01:01:46.605589 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-01 01:01:46.605595 | orchestrator | Thursday 01 January 2026 01:01:43 +0000 (0:00:00.656) 0:12:16.083 ****** 2026-01-01 01:01:46.605602 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:46.605606 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:46.605610 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:46.605614 | orchestrator | 2026-01-01 01:01:46.605618 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-01 01:01:46.605622 | orchestrator | Thursday 01 January 2026 01:01:43 +0000 (0:00:00.359) 0:12:16.443 ****** 2026-01-01 01:01:46.605625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:01:46.605629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:01:46.605633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:01:46.605637 | orchestrator 
| skipping: [testbed-node-3] 2026-01-01 01:01:46.605640 | orchestrator | 2026-01-01 01:01:46.605644 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-01 01:01:46.605648 | orchestrator | Thursday 01 January 2026 01:01:44 +0000 (0:00:00.638) 0:12:17.082 ****** 2026-01-01 01:01:46.605652 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:46.605656 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:46.605659 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:46.605663 | orchestrator | 2026-01-01 01:01:46.605667 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:01:46.605671 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-01 01:01:46.605675 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-01 01:01:46.605679 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-01 01:01:46.605683 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-01 01:01:46.605686 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-01 01:01:46.605695 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-01 01:01:46.605698 | orchestrator | 2026-01-01 01:01:46.605702 | orchestrator | 2026-01-01 01:01:46.605706 | orchestrator | 2026-01-01 01:01:46.605710 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:01:46.605715 | orchestrator | Thursday 01 January 2026 01:01:44 +0000 (0:00:00.270) 0:12:17.353 ****** 2026-01-01 01:01:46.605722 | orchestrator | =============================================================================== 
2026-01-01 01:01:46.605727 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 59.79s 2026-01-01 01:01:46.605733 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.27s 2026-01-01 01:01:46.605743 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.21s 2026-01-01 01:01:46.605749 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.30s 2026-01-01 01:01:46.605755 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.23s 2026-01-01 01:01:46.605760 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.36s 2026-01-01 01:01:46.605765 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.69s 2026-01-01 01:01:46.605771 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.88s 2026-01-01 01:01:46.605776 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.40s 2026-01-01 01:01:46.605781 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.24s 2026-01-01 01:01:46.605787 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 8.01s 2026-01-01 01:01:46.605792 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.77s 2026-01-01 01:01:46.605799 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 5.47s 2026-01-01 01:01:46.605822 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.33s 2026-01-01 01:01:46.605828 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.00s 2026-01-01 01:01:46.605833 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.69s 2026-01-01 
01:01:46.605838 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.43s 2026-01-01 01:01:46.605844 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.35s 2026-01-01 01:01:46.605849 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 4.34s 2026-01-01 01:01:46.605855 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.91s 2026-01-01 01:01:46.605861 | orchestrator | 2026-01-01 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:01:49.631023 | orchestrator | 2026-01-01 01:01:49 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:01:49.633032 | orchestrator | 2026-01-01 01:01:49 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED 2026-01-01 01:01:49.635135 | orchestrator | 2026-01-01 01:01:49 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:01:49.635175 | orchestrator | 2026-01-01 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:01:52.680335 | orchestrator | 2026-01-01 01:01:52 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:01:52.681017 | orchestrator | 2026-01-01 01:01:52 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED 2026-01-01 01:01:52.682384 | orchestrator | 2026-01-01 01:01:52 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:01:52.682684 | orchestrator | 2026-01-01 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:01:55.754343 | orchestrator | 2026-01-01 01:01:55 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:01:55.756419 | orchestrator | 2026-01-01 01:01:55 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED 2026-01-01 01:01:55.758723 | orchestrator | 2026-01-01 01:01:55 | INFO  | Task 
3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:01:55.759061 | orchestrator | 2026-01-01 01:01:55 | INFO  | Wait 1 second(s) until the next
check 2026-01-01 01:02:44.642255 | orchestrator | 2026-01-01 01:02:44 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:02:44.644379 | orchestrator | 2026-01-01 01:02:44 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED 2026-01-01 01:02:44.646682 | orchestrator | 2026-01-01 01:02:44 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:02:44.646960 | orchestrator | 2026-01-01 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:02:47.700856 | orchestrator | 2026-01-01 01:02:47 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:02:47.704472 | orchestrator | 2026-01-01 01:02:47 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state STARTED 2026-01-01 01:02:47.705796 | orchestrator | 2026-01-01 01:02:47 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:02:47.705825 | orchestrator | 2026-01-01 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:02:50.755976 | orchestrator | 2026-01-01 01:02:50 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:02:50.757576 | orchestrator | 2026-01-01 01:02:50 | INFO  | Task c3d554b5-85e1-4c60-ad8e-1d6366368f31 is in state SUCCESS 2026-01-01 01:02:50.760040 | orchestrator | 2026-01-01 01:02:50.760082 | orchestrator | 2026-01-01 01:02:50.760095 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:02:50.760108 | orchestrator | 2026-01-01 01:02:50.760119 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:02:50.760131 | orchestrator | Thursday 01 January 2026 00:59:52 +0000 (0:00:00.291) 0:00:00.291 ****** 2026-01-01 01:02:50.760142 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:02:50.760155 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:02:50.760166 | orchestrator | ok: 
[testbed-node-2] 2026-01-01 01:02:50.760177 | orchestrator | 2026-01-01 01:02:50.760188 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:02:50.760199 | orchestrator | Thursday 01 January 2026 00:59:52 +0000 (0:00:00.309) 0:00:00.601 ****** 2026-01-01 01:02:50.760212 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-01 01:02:50.760224 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-01 01:02:50.760236 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-01 01:02:50.760247 | orchestrator | 2026-01-01 01:02:50.760258 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-01 01:02:50.760269 | orchestrator | 2026-01-01 01:02:50.760280 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-01 01:02:50.760291 | orchestrator | Thursday 01 January 2026 00:59:53 +0000 (0:00:00.537) 0:00:01.138 ****** 2026-01-01 01:02:50.760302 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:02:50.760313 | orchestrator | 2026-01-01 01:02:50.760329 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-01 01:02:50.760349 | orchestrator | Thursday 01 January 2026 00:59:53 +0000 (0:00:00.567) 0:00:01.706 ****** 2026-01-01 01:02:50.760368 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-01 01:02:50.760864 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-01 01:02:50.760886 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-01 01:02:50.760899 | orchestrator | 2026-01-01 01:02:50.760912 | orchestrator | TASK [opensearch : Ensuring config directories 
exist] ************************** 2026-01-01 01:02:50.760925 | orchestrator | Thursday 01 January 2026 00:59:54 +0000 (0:00:00.837) 0:00:02.543 ****** 2026-01-01 01:02:50.760972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.761015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.761042 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.761058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.761079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.761102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.761114 | orchestrator | 2026-01-01 01:02:50.761126 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-01 01:02:50.761137 | orchestrator | Thursday 01 January 2026 00:59:56 +0000 (0:00:01.826) 0:00:04.370 ****** 2026-01-01 01:02:50.761148 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:02:50.761159 | orchestrator | 2026-01-01 01:02:50.761171 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-01 01:02:50.761190 | orchestrator | Thursday 01 January 2026 00:59:56 +0000 (0:00:00.573) 0:00:04.943 ****** 2026-01-01 01:02:50.761203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.761215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.761239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.761252 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.761274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.761288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.761307 | orchestrator | 2026-01-01 01:02:50.761319 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-01 01:02:50.761330 | orchestrator | Thursday 01 January 2026 00:59:59 +0000 (0:00:02.582) 0:00:07.525 ****** 2026-01-01 01:02:50.761346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:02:50.761359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:02:50.761379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 01:02:50.761427 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:02:50.761446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 01:02:50.761459 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:02:50.761471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:02:50.761491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 01:02:50.761503 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:02:50.761515 | orchestrator | 2026-01-01 01:02:50.761526 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-01 01:02:50.761538 | orchestrator | Thursday 01 January 2026 01:00:01 +0000 (0:00:01.582) 0:00:09.108 ****** 2026-01-01 01:02:50.761549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:02:50.761576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:02:50.761757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 01:02:50.761889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 01:02:50.761908 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:02:50.761920 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:02:50.761942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:02:50.761960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 01:02:50.761973 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:02:50.761984 | orchestrator | 2026-01-01 01:02:50.761995 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-01 01:02:50.762007 | orchestrator | Thursday 01 January 2026 01:00:02 +0000 (0:00:01.131) 0:00:10.239 ****** 2026-01-01 01:02:50.762063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.762089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.762108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.762131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.762144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.762163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.762183 | orchestrator | 2026-01-01 01:02:50.762194 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-01 01:02:50.762205 | orchestrator | Thursday 01 January 2026 01:00:04 +0000 (0:00:02.790) 0:00:13.030 ****** 2026-01-01 01:02:50.762216 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:02:50.762228 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:02:50.762239 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:02:50.762249 | orchestrator | 2026-01-01 01:02:50.762260 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-01 01:02:50.762271 | orchestrator | Thursday 01 January 2026 01:00:07 +0000 (0:00:02.798) 0:00:15.829 ****** 2026-01-01 01:02:50.762282 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:02:50.762293 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:02:50.762304 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:02:50.762315 | orchestrator | 2026-01-01 01:02:50.762326 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-01-01 01:02:50.762337 | orchestrator | Thursday 01 January 2026 01:00:10 +0000 (0:00:02.398) 0:00:18.227 ****** 2026-01-01 01:02:50.762354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.762366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.762378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:02:50.762426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.762445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.762458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-01 01:02:50.762470 | orchestrator | 2026-01-01 01:02:50.762482 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-01-01 01:02:50.762500 | orchestrator | Thursday 01 January 2026 01:00:12 +0000 (0:00:02.488) 0:00:20.716 ****** 2026-01-01 
01:02:50.762511 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 01:02:50.762524 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:02:50.762537 | orchestrator | } 2026-01-01 01:02:50.762551 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 01:02:50.762563 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:02:50.762577 | orchestrator | } 2026-01-01 01:02:50.762590 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 01:02:50.762604 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:02:50.762616 | orchestrator | } 2026-01-01 01:02:50.762629 | orchestrator | 2026-01-01 01:02:50.762642 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 01:02:50.762661 | orchestrator | Thursday 01 January 2026 01:00:13 +0000 (0:00:00.395) 0:00:21.111 ****** 2026-01-01 01:02:50.762675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:02:50.762690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 01:02:50.762705 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:02:50.762723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:02:50.762742 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 01:02:50.762761 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:02:50.762773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:02:50.762785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-01 01:02:50.762797 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:02:50.762808 | orchestrator | 2026-01-01 01:02:50.762819 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-01 01:02:50.762834 | orchestrator | Thursday 01 January 2026 01:00:14 +0000 (0:00:01.762) 0:00:22.874 ****** 2026-01-01 01:02:50.762846 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:02:50.762857 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:02:50.762867 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:02:50.762879 | orchestrator | 2026-01-01 01:02:50.762889 | 
orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-01 01:02:50.762900 | orchestrator | Thursday 01 January 2026 01:00:15 +0000 (0:00:00.386) 0:00:23.260 ****** 2026-01-01 01:02:50.762911 | orchestrator | 2026-01-01 01:02:50.762922 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-01 01:02:50.762939 | orchestrator | Thursday 01 January 2026 01:00:15 +0000 (0:00:00.141) 0:00:23.402 ****** 2026-01-01 01:02:50.762950 | orchestrator | 2026-01-01 01:02:50.762961 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-01 01:02:50.762972 | orchestrator | Thursday 01 January 2026 01:00:15 +0000 (0:00:00.223) 0:00:23.626 ****** 2026-01-01 01:02:50.762983 | orchestrator | 2026-01-01 01:02:50.762994 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-01 01:02:50.763005 | orchestrator | Thursday 01 January 2026 01:00:15 +0000 (0:00:00.083) 0:00:23.709 ****** 2026-01-01 01:02:50.763015 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:02:50.763026 | orchestrator | 2026-01-01 01:02:50.763037 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-01 01:02:50.763048 | orchestrator | Thursday 01 January 2026 01:00:15 +0000 (0:00:00.302) 0:00:24.012 ****** 2026-01-01 01:02:50.763059 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:02:50.763071 | orchestrator | 2026-01-01 01:02:50.763081 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-01 01:02:50.763092 | orchestrator | Thursday 01 January 2026 01:00:16 +0000 (0:00:00.303) 0:00:24.316 ****** 2026-01-01 01:02:50.763103 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:02:50.763114 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:02:50.763125 | orchestrator | changed: [testbed-node-2] 
2026-01-01 01:02:50.763136 | orchestrator | 2026-01-01 01:02:50.763147 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-01 01:02:50.763158 | orchestrator | Thursday 01 January 2026 01:01:14 +0000 (0:00:57.786) 0:01:22.102 ****** 2026-01-01 01:02:50.763168 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:02:50.763185 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:02:50.763204 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:02:50.763224 | orchestrator | 2026-01-01 01:02:50.763243 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-01 01:02:50.763264 | orchestrator | Thursday 01 January 2026 01:02:36 +0000 (0:01:22.449) 0:02:44.551 ****** 2026-01-01 01:02:50.763283 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:02:50.763294 | orchestrator | 2026-01-01 01:02:50.763305 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-01 01:02:50.763316 | orchestrator | Thursday 01 January 2026 01:02:37 +0000 (0:00:00.529) 0:02:45.081 ****** 2026-01-01 01:02:50.763327 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:02:50.763338 | orchestrator | 2026-01-01 01:02:50.763349 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-01 01:02:50.763360 | orchestrator | Thursday 01 January 2026 01:02:39 +0000 (0:00:02.644) 0:02:47.725 ****** 2026-01-01 01:02:50.763371 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:02:50.763382 | orchestrator | 2026-01-01 01:02:50.763465 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-01 01:02:50.763481 | orchestrator | Thursday 01 January 2026 01:02:42 +0000 (0:00:02.510) 0:02:50.235 ****** 2026-01-01 01:02:50.763492 | orchestrator | changed: [testbed-node-0] 
2026-01-01 01:02:50.763503 | orchestrator | 2026-01-01 01:02:50.763514 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-01 01:02:50.763524 | orchestrator | Thursday 01 January 2026 01:02:45 +0000 (0:00:03.689) 0:02:53.924 ****** 2026-01-01 01:02:50.763535 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:02:50.763546 | orchestrator | 2026-01-01 01:02:50.763557 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:02:50.763569 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-01 01:02:50.763582 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-01 01:02:50.763602 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-01 01:02:50.763613 | orchestrator | 2026-01-01 01:02:50.763624 | orchestrator | 2026-01-01 01:02:50.763635 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:02:50.763646 | orchestrator | Thursday 01 January 2026 01:02:48 +0000 (0:00:02.745) 0:02:56.670 ****** 2026-01-01 01:02:50.763657 | orchestrator | =============================================================================== 2026-01-01 01:02:50.763668 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.45s 2026-01-01 01:02:50.763679 | orchestrator | opensearch : Restart opensearch container ------------------------------ 57.79s 2026-01-01 01:02:50.763690 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.69s 2026-01-01 01:02:50.763700 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.80s 2026-01-01 01:02:50.763711 | orchestrator | opensearch : Copying over config.json files for services ---------------- 
2.79s 2026-01-01 01:02:50.763722 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.75s 2026-01-01 01:02:50.763733 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.64s 2026-01-01 01:02:50.763749 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.58s 2026-01-01 01:02:50.763761 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.51s 2026-01-01 01:02:50.763772 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.49s 2026-01-01 01:02:50.763782 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.40s 2026-01-01 01:02:50.763793 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.83s 2026-01-01 01:02:50.763804 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.76s 2026-01-01 01:02:50.763815 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.58s 2026-01-01 01:02:50.763825 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.13s 2026-01-01 01:02:50.763836 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.84s 2026-01-01 01:02:50.763847 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-01-01 01:02:50.763858 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-01-01 01:02:50.763869 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2026-01-01 01:02:50.763880 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-01-01 01:02:50.763890 | orchestrator | 2026-01-01 01:02:50 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in 
state STARTED 2026-01-01 01:02:50.763900 | orchestrator | 2026-01-01 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:02:53.804624 | orchestrator | 2026-01-01 01:02:53 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:02:53.805928 | orchestrator | 2026-01-01 01:02:53 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:02:53.805972 | orchestrator | 2026-01-01 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:02:56.853845 | orchestrator | 2026-01-01 01:02:56 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:02:56.855716 | orchestrator | 2026-01-01 01:02:56 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:02:56.855800 | orchestrator | 2026-01-01 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:02:59.894497 | orchestrator | 2026-01-01 01:02:59 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:02:59.897960 | orchestrator | 2026-01-01 01:02:59 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:02:59.898166 | orchestrator | 2026-01-01 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:02.946538 | orchestrator | 2026-01-01 01:03:02 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:02.948177 | orchestrator | 2026-01-01 01:03:02 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:03:02.948209 | orchestrator | 2026-01-01 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:05.998619 | orchestrator | 2026-01-01 01:03:05 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:05.998938 | orchestrator | 2026-01-01 01:03:06 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:03:05.998982 | orchestrator | 2026-01-01 01:03:06 | 
INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:09.044050 | orchestrator | 2026-01-01 01:03:09 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:09.047854 | orchestrator | 2026-01-01 01:03:09 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:03:09.047934 | orchestrator | 2026-01-01 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:12.094159 | orchestrator | 2026-01-01 01:03:12 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:12.095274 | orchestrator | 2026-01-01 01:03:12 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:03:12.095321 | orchestrator | 2026-01-01 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:15.159078 | orchestrator | 2026-01-01 01:03:15 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:15.160898 | orchestrator | 2026-01-01 01:03:15 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:03:15.160948 | orchestrator | 2026-01-01 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:18.213621 | orchestrator | 2026-01-01 01:03:18 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:18.214783 | orchestrator | 2026-01-01 01:03:18 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:03:18.215092 | orchestrator | 2026-01-01 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:21.248927 | orchestrator | 2026-01-01 01:03:21 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:21.250821 | orchestrator | 2026-01-01 01:03:21 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state STARTED 2026-01-01 01:03:21.251012 | orchestrator | 2026-01-01 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:24.309359 | 
orchestrator | 2026-01-01 01:03:24 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:24.311139 | orchestrator | 2026-01-01 01:03:24 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:24.313032 | orchestrator | 2026-01-01 01:03:24 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:24.318151 | orchestrator | 2026-01-01 01:03:24 | INFO  | Task 3f41172d-7969-44fc-a814-acd72bf80206 is in state SUCCESS 2026-01-01 01:03:24.320123 | orchestrator | 2026-01-01 01:03:24.320196 | orchestrator | 2026-01-01 01:03:24.320226 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-01 01:03:24.320243 | orchestrator | 2026-01-01 01:03:24.320259 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-01 01:03:24.320306 | orchestrator | Thursday 01 January 2026 00:59:52 +0000 (0:00:00.105) 0:00:00.105 ****** 2026-01-01 01:03:24.320322 | orchestrator | ok: [localhost] => { 2026-01-01 01:03:24.320339 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-01 01:03:24.320353 | orchestrator | } 2026-01-01 01:03:24.320367 | orchestrator | 2026-01-01 01:03:24.320405 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-01 01:03:24.320418 | orchestrator | Thursday 01 January 2026 00:59:52 +0000 (0:00:00.048) 0:00:00.154 ****** 2026-01-01 01:03:24.320431 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-01 01:03:24.320447 | orchestrator | ...ignoring 2026-01-01 01:03:24.320461 | orchestrator | 2026-01-01 01:03:24.320475 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-01 01:03:24.320488 | orchestrator | Thursday 01 January 2026 00:59:55 +0000 (0:00:03.023) 0:00:03.177 ****** 2026-01-01 01:03:24.320501 | orchestrator | skipping: [localhost] 2026-01-01 01:03:24.320513 | orchestrator | 2026-01-01 01:03:24.320525 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-01 01:03:24.320537 | orchestrator | Thursday 01 January 2026 00:59:55 +0000 (0:00:00.061) 0:00:03.239 ****** 2026-01-01 01:03:24.320549 | orchestrator | ok: [localhost] 2026-01-01 01:03:24.320561 | orchestrator | 2026-01-01 01:03:24.320573 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:03:24.320586 | orchestrator | 2026-01-01 01:03:24.320598 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:03:24.320610 | orchestrator | Thursday 01 January 2026 00:59:55 +0000 (0:00:00.161) 0:00:03.400 ****** 2026-01-01 01:03:24.320622 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.320634 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:03:24.320646 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:03:24.320658 | orchestrator | 2026-01-01 01:03:24.320671 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:03:24.320683 | orchestrator | Thursday 01 January 2026 00:59:55 +0000 (0:00:00.365) 0:00:03.766 ****** 2026-01-01 01:03:24.320696 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-01 01:03:24.320710 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-01-01 01:03:24.320723 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-01 01:03:24.320736 | orchestrator | 2026-01-01 01:03:24.320748 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-01 01:03:24.320761 | orchestrator | 2026-01-01 01:03:24.320774 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-01 01:03:24.320786 | orchestrator | Thursday 01 January 2026 00:59:56 +0000 (0:00:00.585) 0:00:04.352 ****** 2026-01-01 01:03:24.320798 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-01 01:03:24.320810 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-01 01:03:24.320821 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-01 01:03:24.320831 | orchestrator | 2026-01-01 01:03:24.320843 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-01 01:03:24.320854 | orchestrator | Thursday 01 January 2026 00:59:56 +0000 (0:00:00.372) 0:00:04.724 ****** 2026-01-01 01:03:24.320866 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:03:24.320889 | orchestrator | 2026-01-01 01:03:24.320906 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-01 01:03:24.320918 | orchestrator | Thursday 01 January 2026 00:59:57 +0000 (0:00:00.659) 0:00:05.384 ****** 2026-01-01 01:03:24.321011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 01:03:24.321059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 01:03:24.321078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 01:03:24.321097 | orchestrator | 2026-01-01 01:03:24.321117 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-01 01:03:24.321130 | orchestrator | Thursday 01 January 2026 01:00:00 +0000 (0:00:02.995) 0:00:08.380 ****** 2026-01-01 01:03:24.321142 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.321155 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.321166 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.321178 | orchestrator | 2026-01-01 01:03:24.321189 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-01 01:03:24.321201 | orchestrator | Thursday 01 January 2026 01:00:01 +0000 (0:00:00.828) 0:00:09.208 ****** 2026-01-01 01:03:24.321213 | orchestrator | skipping: [testbed-node-1] 2026-01-01 
01:03:24.321224 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.321236 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.321248 | orchestrator | 2026-01-01 01:03:24.321260 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-01 01:03:24.321272 | orchestrator | Thursday 01 January 2026 01:00:02 +0000 (0:00:01.694) 0:00:10.903 ****** 2026-01-01 01:03:24.321286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 01:03:24.321324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 01:03:24.321361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 
01:03:24.321420 | orchestrator | 2026-01-01 01:03:24.321433 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-01 01:03:24.321446 | orchestrator | Thursday 01 January 2026 01:00:06 +0000 (0:00:03.816) 0:00:14.720 ****** 2026-01-01 01:03:24.321458 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.321477 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.321489 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.321501 | orchestrator | 2026-01-01 01:03:24.321512 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-01 01:03:24.321523 | orchestrator | Thursday 01 January 2026 01:00:07 +0000 (0:00:01.094) 0:00:15.815 ****** 2026-01-01 01:03:24.321535 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.321547 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:03:24.321559 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:03:24.321571 | orchestrator | 2026-01-01 01:03:24.321583 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-01 01:03:24.321596 | orchestrator | Thursday 01 January 2026 01:00:12 +0000 (0:00:05.036) 0:00:20.852 ****** 2026-01-01 01:03:24.321620 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:03:24.321632 | orchestrator | 2026-01-01 01:03:24.321644 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-01 01:03:24.321656 | orchestrator | Thursday 01 January 2026 01:00:13 +0000 (0:00:00.603) 0:00:21.456 ****** 2026-01-01 01:03:24.321684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.321699 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.321712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.321732 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.321759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.321773 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.321785 | orchestrator | 2026-01-01 01:03:24.321798 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-01 01:03:24.321810 | orchestrator | Thursday 01 January 2026 01:00:16 +0000 (0:00:03.244) 0:00:24.700 ****** 2026-01-01 01:03:24.321822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.321841 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.321864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.321878 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.321891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.321911 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.321923 | orchestrator | 2026-01-01 01:03:24.321935 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-01 01:03:24.321947 | orchestrator | Thursday 01 January 2026 01:00:20 +0000 (0:00:03.735) 0:00:28.436 ****** 2026-01-01 01:03:24.321972 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.321986 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.322007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.322066 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.322085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.322100 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.322113 | orchestrator | 2026-01-01 01:03:24.322126 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-01-01 01:03:24.322140 | orchestrator | Thursday 01 January 2026 01:00:23 +0000 
(0:00:02.679) 0:00:31.115 ****** 2026-01-01 01:03:24.322164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 01:03:24.322191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 01:03:24.322215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 01:03:24.322236 | orchestrator | 2026-01-01 01:03:24.322249 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-01-01 01:03:24.322262 | orchestrator | Thursday 01 January 2026 01:00:26 +0000 (0:00:03.583) 0:00:34.699 ****** 2026-01-01 01:03:24.322275 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 01:03:24.322289 | 
orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:03:24.322303 | orchestrator | } 2026-01-01 01:03:24.322316 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 01:03:24.322329 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:03:24.322342 | orchestrator | } 2026-01-01 01:03:24.322356 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 01:03:24.322387 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:03:24.322400 | orchestrator | } 2026-01-01 01:03:24.322411 | orchestrator | 2026-01-01 01:03:24.322421 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 01:03:24.322433 | orchestrator | Thursday 01 January 2026 01:00:27 +0000 (0:00:00.759) 0:00:35.459 ****** 2026-01-01 01:03:24.322450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.322464 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.322487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.322509 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.322527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.322541 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.322552 | orchestrator | 2026-01-01 01:03:24.322562 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-01-01 01:03:24.322573 | orchestrator | Thursday 01 January 2026 01:00:30 +0000 (0:00:02.864) 0:00:38.324 ****** 2026-01-01 01:03:24.322604 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.322616 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.322627 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.322638 | orchestrator | 2026-01-01 01:03:24.322648 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-01-01 01:03:24.322660 | orchestrator | Thursday 01 January 2026 01:00:30 +0000 (0:00:00.363) 0:00:38.687 ****** 2026-01-01 01:03:24.322671 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.322681 | orchestrator | 2026-01-01 01:03:24.322692 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-01-01 01:03:24.322703 | orchestrator | Thursday 01 January 2026 01:00:30 +0000 (0:00:00.134) 0:00:38.822 ****** 2026-01-01 01:03:24.322713 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.322723 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.322733 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.322753 | orchestrator | 2026-01-01 01:03:24.322765 | orchestrator | TASK 
[mariadb : Run MariaDB wsrep recovery] ************************************ 2026-01-01 01:03:24.322777 | orchestrator | Thursday 01 January 2026 01:00:31 +0000 (0:00:00.581) 0:00:39.404 ****** 2026-01-01 01:03:24.322796 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.322809 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.322821 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.322833 | orchestrator | 2026-01-01 01:03:24.322845 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-01-01 01:03:24.322856 | orchestrator | Thursday 01 January 2026 01:00:31 +0000 (0:00:00.340) 0:00:39.744 ****** 2026-01-01 01:03:24.322867 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.322880 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.322892 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.322904 | orchestrator | 2026-01-01 01:03:24.322916 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-01-01 01:03:24.322927 | orchestrator | Thursday 01 January 2026 01:00:32 +0000 (0:00:00.288) 0:00:40.033 ****** 2026-01-01 01:03:24.322938 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.322949 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.322961 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.322972 | orchestrator | 2026-01-01 01:03:24.322984 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-01-01 01:03:24.322995 | orchestrator | Thursday 01 January 2026 01:00:32 +0000 (0:00:00.268) 0:00:40.301 ****** 2026-01-01 01:03:24.323005 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323015 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323026 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323037 | orchestrator | 2026-01-01 01:03:24.323048 | orchestrator | TASK 
[mariadb : Registering MariaDB seqno variable] **************************** 2026-01-01 01:03:24.323060 | orchestrator | Thursday 01 January 2026 01:00:32 +0000 (0:00:00.434) 0:00:40.736 ****** 2026-01-01 01:03:24.323070 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323080 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323091 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323101 | orchestrator | 2026-01-01 01:03:24.323112 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-01-01 01:03:24.323123 | orchestrator | Thursday 01 January 2026 01:00:33 +0000 (0:00:00.288) 0:00:41.024 ****** 2026-01-01 01:03:24.323134 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-01 01:03:24.323146 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-01 01:03:24.323158 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-01 01:03:24.323169 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-01 01:03:24.323180 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-01 01:03:24.323191 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-01 01:03:24.323202 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323213 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323224 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-01 01:03:24.323236 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-01 01:03:24.323247 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-01 01:03:24.323259 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323270 | orchestrator | 2026-01-01 01:03:24.323281 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-01-01 01:03:24.323289 | orchestrator | 
Thursday 01 January 2026 01:00:33 +0000 (0:00:00.343) 0:00:41.367 ****** 2026-01-01 01:03:24.323296 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323303 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323309 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323316 | orchestrator | 2026-01-01 01:03:24.323323 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-01-01 01:03:24.323339 | orchestrator | Thursday 01 January 2026 01:00:33 +0000 (0:00:00.332) 0:00:41.699 ****** 2026-01-01 01:03:24.323345 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323352 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323359 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323365 | orchestrator | 2026-01-01 01:03:24.323408 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-01-01 01:03:24.323416 | orchestrator | Thursday 01 January 2026 01:00:34 +0000 (0:00:00.745) 0:00:42.445 ****** 2026-01-01 01:03:24.323422 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323428 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323434 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323441 | orchestrator | 2026-01-01 01:03:24.323447 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-01-01 01:03:24.323454 | orchestrator | Thursday 01 January 2026 01:00:34 +0000 (0:00:00.386) 0:00:42.832 ****** 2026-01-01 01:03:24.323460 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323472 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323479 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323485 | orchestrator | 2026-01-01 01:03:24.323491 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-01-01 01:03:24.323497 | orchestrator | 
Thursday 01 January 2026 01:00:35 +0000 (0:00:00.381) 0:00:43.213 ****** 2026-01-01 01:03:24.323504 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323510 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323516 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323522 | orchestrator | 2026-01-01 01:03:24.323529 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-01-01 01:03:24.323535 | orchestrator | Thursday 01 January 2026 01:00:35 +0000 (0:00:00.342) 0:00:43.556 ****** 2026-01-01 01:03:24.323541 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323548 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323554 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323560 | orchestrator | 2026-01-01 01:03:24.323567 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-01-01 01:03:24.323573 | orchestrator | Thursday 01 January 2026 01:00:35 +0000 (0:00:00.373) 0:00:43.930 ****** 2026-01-01 01:03:24.323579 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323585 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323592 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323598 | orchestrator | 2026-01-01 01:03:24.323604 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-01-01 01:03:24.323620 | orchestrator | Thursday 01 January 2026 01:00:36 +0000 (0:00:00.598) 0:00:44.528 ****** 2026-01-01 01:03:24.323626 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323633 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323639 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323645 | orchestrator | 2026-01-01 01:03:24.323652 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-01-01 01:03:24.323658 | orchestrator | 
Thursday 01 January 2026 01:00:36 +0000 (0:00:00.327) 0:00:44.856 ****** 2026-01-01 01:03:24.323666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.323679 | orchestrator | skipping: [testbed-node-0] 
2026-01-01 01:03:24.323690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.323697 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323710 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.323721 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323728 | orchestrator | 2026-01-01 01:03:24.323734 | orchestrator | TASK [mariadb : Wait for slave MariaDB] 
**************************************** 2026-01-01 01:03:24.323740 | orchestrator | Thursday 01 January 2026 01:00:39 +0000 (0:00:02.320) 0:00:47.177 ****** 2026-01-01 01:03:24.323747 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323753 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323759 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323765 | orchestrator | 2026-01-01 01:03:24.323772 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-01-01 01:03:24.323778 | orchestrator | Thursday 01 January 2026 01:00:39 +0000 (0:00:00.368) 0:00:47.545 ****** 2026-01-01 01:03:24.323792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.323803 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.323821 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 01:03:24.323838 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323844 | orchestrator | 2026-01-01 01:03:24.323851 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-01-01 01:03:24.323857 | orchestrator | Thursday 01 January 2026 01:00:42 +0000 (0:00:02.837) 0:00:50.383 ****** 2026-01-01 01:03:24.323863 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323869 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323876 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323882 | orchestrator | 2026-01-01 01:03:24.323888 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-01 01:03:24.323898 | orchestrator | Thursday 01 January 2026 01:00:42 +0000 (0:00:00.343) 0:00:50.726 ****** 2026-01-01 01:03:24.323905 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323911 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323917 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323928 | orchestrator | 2026-01-01 01:03:24.323934 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-01 01:03:24.323941 | orchestrator | Thursday 01 January 2026 01:00:43 +0000 (0:00:00.355) 0:00:51.081 ****** 2026-01-01 01:03:24.323947 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323953 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323959 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.323965 | orchestrator | 2026-01-01 
01:03:24.323972 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-01 01:03:24.323978 | orchestrator | Thursday 01 January 2026 01:00:43 +0000 (0:00:00.339) 0:00:51.421 ****** 2026-01-01 01:03:24.323984 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.323991 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.323997 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.324003 | orchestrator | 2026-01-01 01:03:24.324009 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-01 01:03:24.324015 | orchestrator | Thursday 01 January 2026 01:00:44 +0000 (0:00:00.791) 0:00:52.212 ****** 2026-01-01 01:03:24.324022 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.324028 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.324034 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.324040 | orchestrator | 2026-01-01 01:03:24.324047 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-01 01:03:24.324053 | orchestrator | Thursday 01 January 2026 01:00:44 +0000 (0:00:00.366) 0:00:52.578 ****** 2026-01-01 01:03:24.324059 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.324065 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:03:24.324072 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:03:24.324078 | orchestrator | 2026-01-01 01:03:24.324084 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-01 01:03:24.324090 | orchestrator | Thursday 01 January 2026 01:00:45 +0000 (0:00:00.980) 0:00:53.559 ****** 2026-01-01 01:03:24.324097 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.324103 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:03:24.324109 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:03:24.324116 | orchestrator | 2026-01-01 01:03:24.324122 | 
orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-01 01:03:24.324128 | orchestrator | Thursday 01 January 2026 01:00:46 +0000 (0:00:00.728) 0:00:54.288 ****** 2026-01-01 01:03:24.324134 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.324141 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:03:24.324147 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:03:24.324153 | orchestrator | 2026-01-01 01:03:24.324159 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-01 01:03:24.324166 | orchestrator | Thursday 01 January 2026 01:00:46 +0000 (0:00:00.480) 0:00:54.769 ****** 2026-01-01 01:03:24.324173 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-01 01:03:24.324180 | orchestrator | ...ignoring 2026-01-01 01:03:24.324186 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-01 01:03:24.324192 | orchestrator | ...ignoring 2026-01-01 01:03:24.324199 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-01 01:03:24.324205 | orchestrator | ...ignoring 2026-01-01 01:03:24.324211 | orchestrator | 2026-01-01 01:03:24.324217 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-01 01:03:24.324224 | orchestrator | Thursday 01 January 2026 01:00:57 +0000 (0:00:10.782) 0:01:05.551 ****** 2026-01-01 01:03:24.324230 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.324236 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:03:24.324242 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:03:24.324252 | orchestrator | 2026-01-01 01:03:24.324259 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-01 01:03:24.324265 | orchestrator | Thursday 01 January 2026 01:00:57 +0000 (0:00:00.365) 0:01:05.917 ****** 2026-01-01 01:03:24.324271 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.324277 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.324284 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.324290 | orchestrator | 2026-01-01 01:03:24.324299 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-01 01:03:24.324305 | orchestrator | Thursday 01 January 2026 01:00:58 +0000 (0:00:00.570) 0:01:06.487 ****** 2026-01-01 01:03:24.324312 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.324318 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.324324 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.324330 | orchestrator | 2026-01-01 01:03:24.324337 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-01 01:03:24.324343 | orchestrator | Thursday 01 January 2026 01:00:58 +0000 (0:00:00.341) 0:01:06.828 ****** 2026-01-01 01:03:24.324349 | orchestrator | skipping: 
[testbed-node-0] 2026-01-01 01:03:24.324355 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.324362 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.324368 | orchestrator | 2026-01-01 01:03:24.324388 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-01 01:03:24.324395 | orchestrator | Thursday 01 January 2026 01:00:59 +0000 (0:00:00.398) 0:01:07.227 ****** 2026-01-01 01:03:24.324401 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.324407 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:03:24.324414 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:03:24.324420 | orchestrator | 2026-01-01 01:03:24.324426 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-01 01:03:24.324433 | orchestrator | Thursday 01 January 2026 01:00:59 +0000 (0:00:00.343) 0:01:07.570 ****** 2026-01-01 01:03:24.324439 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.324450 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.324456 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.324462 | orchestrator | 2026-01-01 01:03:24.324469 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-01 01:03:24.324475 | orchestrator | Thursday 01 January 2026 01:01:00 +0000 (0:00:00.591) 0:01:08.162 ****** 2026-01-01 01:03:24.324481 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.324487 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.324494 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-01 01:03:24.324500 | orchestrator | 2026-01-01 01:03:24.324506 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-01 01:03:24.324513 | orchestrator | Thursday 01 January 2026 01:01:00 +0000 (0:00:00.404) 0:01:08.566 ****** 2026-01-01 
01:03:24.324519 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.324525 | orchestrator | 2026-01-01 01:03:24.324531 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-01 01:03:24.324538 | orchestrator | Thursday 01 January 2026 01:01:11 +0000 (0:00:10.511) 0:01:19.078 ****** 2026-01-01 01:03:24.324544 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.324550 | orchestrator | 2026-01-01 01:03:24.324556 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-01 01:03:24.324563 | orchestrator | Thursday 01 January 2026 01:01:11 +0000 (0:00:00.138) 0:01:19.216 ****** 2026-01-01 01:03:24.324569 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.324575 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.324581 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.324587 | orchestrator | 2026-01-01 01:03:24.324594 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-01 01:03:24.324600 | orchestrator | Thursday 01 January 2026 01:01:12 +0000 (0:00:00.906) 0:01:20.122 ****** 2026-01-01 01:03:24.324606 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.324616 | orchestrator | 2026-01-01 01:03:24.324622 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-01 01:03:24.324628 | orchestrator | Thursday 01 January 2026 01:01:22 +0000 (0:00:10.503) 0:01:30.626 ****** 2026-01-01 01:03:24.324634 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.324641 | orchestrator | 2026-01-01 01:03:24.324647 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-01 01:03:24.324653 | orchestrator | Thursday 01 January 2026 01:01:24 +0000 (0:00:01.595) 0:01:32.221 ****** 2026-01-01 01:03:24.324659 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.324666 | 
orchestrator | 2026-01-01 01:03:24.324672 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-01 01:03:24.324678 | orchestrator | Thursday 01 January 2026 01:01:26 +0000 (0:00:02.179) 0:01:34.401 ****** 2026-01-01 01:03:24.324684 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.324691 | orchestrator | 2026-01-01 01:03:24.324697 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-01 01:03:24.324703 | orchestrator | Thursday 01 January 2026 01:01:26 +0000 (0:00:00.133) 0:01:34.534 ****** 2026-01-01 01:03:24.324710 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.324716 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.324722 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.324728 | orchestrator | 2026-01-01 01:03:24.324735 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-01 01:03:24.324741 | orchestrator | Thursday 01 January 2026 01:01:26 +0000 (0:00:00.326) 0:01:34.860 ****** 2026-01-01 01:03:24.324747 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.324753 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-01 01:03:24.324760 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:03:24.324766 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:03:24.324772 | orchestrator | 2026-01-01 01:03:24.324778 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-01 01:03:24.324784 | orchestrator | skipping: no hosts matched 2026-01-01 01:03:24.324791 | orchestrator | 2026-01-01 01:03:24.324797 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-01 01:03:24.324803 | orchestrator | 2026-01-01 01:03:24.324809 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-01-01 01:03:24.324816 | orchestrator | Thursday 01 January 2026 01:01:27 +0000 (0:00:00.597) 0:01:35.458 ****** 2026-01-01 01:03:24.324822 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:03:24.324828 | orchestrator | 2026-01-01 01:03:24.324834 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-01 01:03:24.324840 | orchestrator | Thursday 01 January 2026 01:01:45 +0000 (0:00:18.132) 0:01:53.591 ****** 2026-01-01 01:03:24.324847 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:03:24.324853 | orchestrator | 2026-01-01 01:03:24.324863 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-01 01:03:24.324869 | orchestrator | Thursday 01 January 2026 01:02:01 +0000 (0:00:15.653) 0:02:09.245 ****** 2026-01-01 01:03:24.324875 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:03:24.324881 | orchestrator | 2026-01-01 01:03:24.324888 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-01 01:03:24.324894 | orchestrator | 2026-01-01 01:03:24.324900 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-01 01:03:24.324907 | orchestrator | Thursday 01 January 2026 01:02:03 +0000 (0:00:02.382) 0:02:11.628 ****** 2026-01-01 01:03:24.324913 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:03:24.324919 | orchestrator | 2026-01-01 01:03:24.324925 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-01 01:03:24.324932 | orchestrator | Thursday 01 January 2026 01:02:27 +0000 (0:00:24.251) 0:02:35.879 ****** 2026-01-01 01:03:24.324938 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:03:24.324944 | orchestrator | 2026-01-01 01:03:24.324954 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-01 01:03:24.324961 
| orchestrator | Thursday 01 January 2026 01:02:38 +0000 (0:00:10.726) 0:02:46.606 ****** 2026-01-01 01:03:24.324967 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:03:24.324973 | orchestrator | 2026-01-01 01:03:24.324979 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-01 01:03:24.324986 | orchestrator | 2026-01-01 01:03:24.324996 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-01 01:03:24.325003 | orchestrator | Thursday 01 January 2026 01:02:41 +0000 (0:00:02.439) 0:02:49.046 ****** 2026-01-01 01:03:24.325009 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.325015 | orchestrator | 2026-01-01 01:03:24.325021 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-01 01:03:24.325028 | orchestrator | Thursday 01 January 2026 01:02:54 +0000 (0:00:12.971) 0:03:02.018 ****** 2026-01-01 01:03:24.325034 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.325040 | orchestrator | 2026-01-01 01:03:24.325046 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-01 01:03:24.325053 | orchestrator | Thursday 01 January 2026 01:02:58 +0000 (0:00:04.614) 0:03:06.632 ****** 2026-01-01 01:03:24.325059 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.325065 | orchestrator | 2026-01-01 01:03:24.325071 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-01 01:03:24.325078 | orchestrator | 2026-01-01 01:03:24.325084 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-01 01:03:24.325090 | orchestrator | Thursday 01 January 2026 01:03:00 +0000 (0:00:02.287) 0:03:08.920 ****** 2026-01-01 01:03:24.325097 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:03:24.325103 | orchestrator | 
2026-01-01 01:03:24.325109 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-01 01:03:24.325116 | orchestrator | Thursday 01 January 2026 01:03:01 +0000 (0:00:00.565) 0:03:09.485 ****** 2026-01-01 01:03:24.325122 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.325128 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.325134 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.325141 | orchestrator | 2026-01-01 01:03:24.325147 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-01 01:03:24.325153 | orchestrator | Thursday 01 January 2026 01:03:03 +0000 (0:00:01.988) 0:03:11.474 ****** 2026-01-01 01:03:24.325159 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.325166 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.325172 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.325178 | orchestrator | 2026-01-01 01:03:24.325184 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-01 01:03:24.325191 | orchestrator | Thursday 01 January 2026 01:03:05 +0000 (0:00:02.447) 0:03:13.922 ****** 2026-01-01 01:03:24.325197 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.325203 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.325210 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.325216 | orchestrator | 2026-01-01 01:03:24.325222 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-01 01:03:24.325228 | orchestrator | Thursday 01 January 2026 01:03:08 +0000 (0:00:02.413) 0:03:16.335 ****** 2026-01-01 01:03:24.325235 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.325241 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.325247 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:03:24.325254 | orchestrator | 
2026-01-01 01:03:24.325260 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-01 01:03:24.325267 | orchestrator | Thursday 01 January 2026 01:03:10 +0000 (0:00:02.438) 0:03:18.774 ****** 2026-01-01 01:03:24.325273 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.325279 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:03:24.325285 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:03:24.325296 | orchestrator | 2026-01-01 01:03:24.325302 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-01 01:03:24.325308 | orchestrator | Thursday 01 January 2026 01:03:15 +0000 (0:00:04.841) 0:03:23.615 ****** 2026-01-01 01:03:24.325315 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.325321 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.325327 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.325333 | orchestrator | 2026-01-01 01:03:24.325340 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-01 01:03:24.325346 | orchestrator | Thursday 01 January 2026 01:03:18 +0000 (0:00:02.567) 0:03:26.183 ****** 2026-01-01 01:03:24.325352 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.325359 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.325365 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.325404 | orchestrator | 2026-01-01 01:03:24.325412 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-01 01:03:24.325418 | orchestrator | Thursday 01 January 2026 01:03:18 +0000 (0:00:00.497) 0:03:26.680 ****** 2026-01-01 01:03:24.325424 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:03:24.325431 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:03:24.325437 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:03:24.325443 | orchestrator | 2026-01-01 01:03:24.325449 | 
orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-01 01:03:24.325459 | orchestrator | Thursday 01 January 2026 01:03:21 +0000 (0:00:02.722) 0:03:29.403 ****** 2026-01-01 01:03:24.325465 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:03:24.325472 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:03:24.325478 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:03:24.325484 | orchestrator | 2026-01-01 01:03:24.325491 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:03:24.325497 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-01 01:03:24.325504 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1  2026-01-01 01:03:24.325511 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-01 01:03:24.325518 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-01 01:03:24.325524 | orchestrator | 2026-01-01 01:03:24.325530 | orchestrator | 2026-01-01 01:03:24.325540 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:03:24.325547 | orchestrator | Thursday 01 January 2026 01:03:21 +0000 (0:00:00.345) 0:03:29.749 ****** 2026-01-01 01:03:24.325553 | orchestrator | =============================================================================== 2026-01-01 01:03:24.325560 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.38s 2026-01-01 01:03:24.325566 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.38s 2026-01-01 01:03:24.325572 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.97s 2026-01-01 01:03:24.325578 | orchestrator | 
mariadb : Check MariaDB service port liveness -------------------------- 10.78s 2026-01-01 01:03:24.325585 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.51s 2026-01-01 01:03:24.325591 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.50s 2026-01-01 01:03:24.325597 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.04s 2026-01-01 01:03:24.325603 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.84s 2026-01-01 01:03:24.325610 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.82s 2026-01-01 01:03:24.325620 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.61s 2026-01-01 01:03:24.325627 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.82s 2026-01-01 01:03:24.325633 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.74s 2026-01-01 01:03:24.325639 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.58s 2026-01-01 01:03:24.325645 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.24s 2026-01-01 01:03:24.325652 | orchestrator | Check MariaDB service --------------------------------------------------- 3.02s 2026-01-01 01:03:24.325658 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.00s 2026-01-01 01:03:24.325664 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.86s 2026-01-01 01:03:24.325670 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.84s 2026-01-01 01:03:24.325677 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.72s 2026-01-01 01:03:24.325683 | orchestrator | 
service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.68s 2026-01-01 01:03:24.325689 | orchestrator | 2026-01-01 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:27.364755 | orchestrator | 2026-01-01 01:03:27 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:27.365547 | orchestrator | 2026-01-01 01:03:27 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:27.366316 | orchestrator | 2026-01-01 01:03:27 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:27.366353 | orchestrator | 2026-01-01 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:30.403534 | orchestrator | 2026-01-01 01:03:30 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:30.404572 | orchestrator | 2026-01-01 01:03:30 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:30.404618 | orchestrator | 2026-01-01 01:03:30 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:30.404625 | orchestrator | 2026-01-01 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:33.440037 | orchestrator | 2026-01-01 01:03:33 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:33.442154 | orchestrator | 2026-01-01 01:03:33 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:33.444222 | orchestrator | 2026-01-01 01:03:33 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:33.444863 | orchestrator | 2026-01-01 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:36.479433 | orchestrator | 2026-01-01 01:03:36 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:36.482283 | orchestrator | 2026-01-01 01:03:36 | INFO  | Task 
97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:36.483740 | orchestrator | 2026-01-01 01:03:36 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:36.483768 | orchestrator | 2026-01-01 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:39.553090 | orchestrator | 2026-01-01 01:03:39 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:39.553189 | orchestrator | 2026-01-01 01:03:39 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:39.553800 | orchestrator | 2026-01-01 01:03:39 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:39.553920 | orchestrator | 2026-01-01 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:42.598133 | orchestrator | 2026-01-01 01:03:42 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:42.598246 | orchestrator | 2026-01-01 01:03:42 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:42.599015 | orchestrator | 2026-01-01 01:03:42 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:42.599112 | orchestrator | 2026-01-01 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:45.640456 | orchestrator | 2026-01-01 01:03:45 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:45.640537 | orchestrator | 2026-01-01 01:03:45 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:45.640906 | orchestrator | 2026-01-01 01:03:45 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:45.640924 | orchestrator | 2026-01-01 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:48.677282 | orchestrator | 2026-01-01 01:03:48 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state 
STARTED 2026-01-01 01:03:48.678755 | orchestrator | 2026-01-01 01:03:48 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:48.679904 | orchestrator | 2026-01-01 01:03:48 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:48.679922 | orchestrator | 2026-01-01 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:51.712005 | orchestrator | 2026-01-01 01:03:51 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:51.712648 | orchestrator | 2026-01-01 01:03:51 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:51.713314 | orchestrator | 2026-01-01 01:03:51 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:51.713338 | orchestrator | 2026-01-01 01:03:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:54.753070 | orchestrator | 2026-01-01 01:03:54 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:54.755777 | orchestrator | 2026-01-01 01:03:54 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:54.757107 | orchestrator | 2026-01-01 01:03:54 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:54.757205 | orchestrator | 2026-01-01 01:03:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:03:57.796206 | orchestrator | 2026-01-01 01:03:57 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:03:57.798750 | orchestrator | 2026-01-01 01:03:57 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:03:57.802547 | orchestrator | 2026-01-01 01:03:57 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:03:57.802588 | orchestrator | 2026-01-01 01:03:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:00.851563 | orchestrator | 
2026-01-01 01:04:00 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state STARTED 2026-01-01 01:04:00.853810 | orchestrator | 2026-01-01 01:04:00 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:04:00.854706 | orchestrator | 2026-01-01 01:04:00 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED 2026-01-01 01:04:00.854758 | orchestrator | 2026-01-01 01:04:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:03.901199 | orchestrator | 2026-01-01 01:04:03 | INFO  | Task e808d1e1-cf3f-4b4d-b9c5-c51589216c9d is in state SUCCESS 2026-01-01 01:04:03.904191 | orchestrator | 2026-01-01 01:04:03.904432 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-01 01:04:03.904455 | orchestrator | 2.16.14 2026-01-01 01:04:03.904469 | orchestrator | 2026-01-01 01:04:03.904480 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-01 01:04:03.904499 | orchestrator | 2026-01-01 01:04:03.904518 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-01 01:04:03.904536 | orchestrator | Thursday 01 January 2026 01:01:50 +0000 (0:00:00.635) 0:00:00.635 ****** 2026-01-01 01:04:03.904555 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:04:03.904692 | orchestrator | 2026-01-01 01:04:03.905672 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-01 01:04:03.905713 | orchestrator | Thursday 01 January 2026 01:01:50 +0000 (0:00:00.681) 0:00:01.316 ****** 2026-01-01 01:04:03.905725 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.905737 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.905748 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.905759 | orchestrator | 2026-01-01 01:04:03.905771 | orchestrator | 
TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-01 01:04:03.905782 | orchestrator | Thursday 01 January 2026 01:01:51 +0000 (0:00:00.662) 0:00:01.978 ****** 2026-01-01 01:04:03.905793 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.905804 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.905815 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.905826 | orchestrator | 2026-01-01 01:04:03.905837 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-01 01:04:03.905848 | orchestrator | Thursday 01 January 2026 01:01:51 +0000 (0:00:00.315) 0:00:02.294 ****** 2026-01-01 01:04:03.905859 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.905870 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.905880 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.905891 | orchestrator | 2026-01-01 01:04:03.905902 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-01 01:04:03.905913 | orchestrator | Thursday 01 January 2026 01:01:52 +0000 (0:00:00.890) 0:00:03.184 ****** 2026-01-01 01:04:03.905924 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.905934 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.905945 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.905979 | orchestrator | 2026-01-01 01:04:03.905990 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-01 01:04:03.906001 | orchestrator | Thursday 01 January 2026 01:01:52 +0000 (0:00:00.297) 0:00:03.482 ****** 2026-01-01 01:04:03.906012 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.906059 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.906070 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.906087 | orchestrator | 2026-01-01 01:04:03.906105 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] 
********************* 2026-01-01 01:04:03.906128 | orchestrator | Thursday 01 January 2026 01:01:53 +0000 (0:00:00.297) 0:00:03.779 ****** 2026-01-01 01:04:03.906156 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.906176 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.906194 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.906213 | orchestrator | 2026-01-01 01:04:03.906231 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-01 01:04:03.906252 | orchestrator | Thursday 01 January 2026 01:01:53 +0000 (0:00:00.330) 0:00:04.110 ****** 2026-01-01 01:04:03.906382 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.906406 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.906424 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.906443 | orchestrator | 2026-01-01 01:04:03.906462 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-01 01:04:03.906492 | orchestrator | Thursday 01 January 2026 01:01:54 +0000 (0:00:00.538) 0:00:04.648 ****** 2026-01-01 01:04:03.906548 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.906567 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.906585 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.906603 | orchestrator | 2026-01-01 01:04:03.906622 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-01 01:04:03.906788 | orchestrator | Thursday 01 January 2026 01:01:54 +0000 (0:00:00.307) 0:00:04.955 ****** 2026-01-01 01:04:03.906811 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 01:04:03.906826 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 01:04:03.906838 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 01:04:03.906952 
| orchestrator | 2026-01-01 01:04:03.906972 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-01 01:04:03.906992 | orchestrator | Thursday 01 January 2026 01:01:55 +0000 (0:00:00.641) 0:00:05.597 ****** 2026-01-01 01:04:03.907012 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.907031 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.907050 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.907068 | orchestrator | 2026-01-01 01:04:03.907088 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-01 01:04:03.907107 | orchestrator | Thursday 01 January 2026 01:01:55 +0000 (0:00:00.465) 0:00:06.063 ****** 2026-01-01 01:04:03.907128 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 01:04:03.907146 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 01:04:03.907163 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 01:04:03.907174 | orchestrator | 2026-01-01 01:04:03.907185 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-01 01:04:03.907212 | orchestrator | Thursday 01 January 2026 01:01:57 +0000 (0:00:02.223) 0:00:08.287 ****** 2026-01-01 01:04:03.907223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-01 01:04:03.907235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-01 01:04:03.907246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-01 01:04:03.907257 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.907268 | orchestrator | 2026-01-01 01:04:03.907333 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-01 01:04:03.907406 | orchestrator | Thursday 01 January 2026 
01:01:58 +0000 (0:00:00.656) 0:00:08.943 ****** 2026-01-01 01:04:03.907425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.907440 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.907451 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.907463 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.907474 | orchestrator | 2026-01-01 01:04:03.907485 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-01 01:04:03.907496 | orchestrator | Thursday 01 January 2026 01:01:59 +0000 (0:00:00.837) 0:00:09.781 ****** 2026-01-01 01:04:03.907510 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.907535 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.907546 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.907556 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.907569 | orchestrator | 2026-01-01 01:04:03.907581 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-01 01:04:03.907592 | orchestrator | Thursday 01 January 2026 01:01:59 +0000 (0:00:00.412) 0:00:10.193 ****** 2026-01-01 01:04:03.907607 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e0f8be22207c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-01 01:01:56.299726', 'end': '2026-01-01 01:01:56.344840', 'delta': '0:00:00.045114', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e0f8be22207c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-01 01:04:03.907631 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b41841859d2a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-01 
01:01:57.051378', 'end': '2026-01-01 01:01:57.090085', 'delta': '0:00:00.038707', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b41841859d2a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-01 01:04:03.907680 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '287618fcb965', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-01 01:01:57.601819', 'end': '2026-01-01 01:01:57.648354', 'delta': '0:00:00.046535', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['287618fcb965'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-01 01:04:03.907693 | orchestrator | 2026-01-01 01:04:03.907706 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-01 01:04:03.907717 | orchestrator | Thursday 01 January 2026 01:01:59 +0000 (0:00:00.192) 0:00:10.386 ****** 2026-01-01 01:04:03.907729 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.907748 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.907760 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.907772 | orchestrator | 2026-01-01 01:04:03.907783 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 
2026-01-01 01:04:03.907795 | orchestrator | Thursday 01 January 2026 01:02:00 +0000 (0:00:00.459) 0:00:10.845 ****** 2026-01-01 01:04:03.907807 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-01 01:04:03.907819 | orchestrator | 2026-01-01 01:04:03.907829 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-01 01:04:03.907839 | orchestrator | Thursday 01 January 2026 01:02:02 +0000 (0:00:01.916) 0:00:12.762 ****** 2026-01-01 01:04:03.907849 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.907859 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.907868 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.907878 | orchestrator | 2026-01-01 01:04:03.907888 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-01 01:04:03.907897 | orchestrator | Thursday 01 January 2026 01:02:02 +0000 (0:00:00.307) 0:00:13.069 ****** 2026-01-01 01:04:03.907907 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.907917 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.907927 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.907936 | orchestrator | 2026-01-01 01:04:03.907946 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-01 01:04:03.907958 | orchestrator | Thursday 01 January 2026 01:02:02 +0000 (0:00:00.406) 0:00:13.476 ****** 2026-01-01 01:04:03.907974 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.907991 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.908006 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.908022 | orchestrator | 2026-01-01 01:04:03.908040 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-01 01:04:03.908057 | orchestrator | Thursday 01 January 2026 01:02:03 +0000 (0:00:00.528) 0:00:14.004 ****** 
2026-01-01 01:04:03.908073 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.908087 | orchestrator | 2026-01-01 01:04:03.908097 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-01 01:04:03.908107 | orchestrator | Thursday 01 January 2026 01:02:03 +0000 (0:00:00.140) 0:00:14.145 ****** 2026-01-01 01:04:03.908119 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.908141 | orchestrator | 2026-01-01 01:04:03.908163 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-01 01:04:03.908179 | orchestrator | Thursday 01 January 2026 01:02:03 +0000 (0:00:00.237) 0:00:14.383 ****** 2026-01-01 01:04:03.908194 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.908208 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.908222 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.908236 | orchestrator | 2026-01-01 01:04:03.908250 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-01 01:04:03.908264 | orchestrator | Thursday 01 January 2026 01:02:04 +0000 (0:00:00.300) 0:00:14.684 ****** 2026-01-01 01:04:03.908278 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.908291 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.908306 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.908320 | orchestrator | 2026-01-01 01:04:03.908334 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-01 01:04:03.908379 | orchestrator | Thursday 01 January 2026 01:02:04 +0000 (0:00:00.318) 0:00:15.003 ****** 2026-01-01 01:04:03.908394 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.908410 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.908425 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.908440 | orchestrator | 2026-01-01 01:04:03.908456 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-01 01:04:03.908472 | orchestrator | Thursday 01 January 2026 01:02:05 +0000 (0:00:00.528) 0:00:15.531 ****** 2026-01-01 01:04:03.908503 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.908520 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.908536 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.908551 | orchestrator | 2026-01-01 01:04:03.908561 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-01 01:04:03.908571 | orchestrator | Thursday 01 January 2026 01:02:05 +0000 (0:00:00.342) 0:00:15.874 ****** 2026-01-01 01:04:03.908581 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.908590 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.908615 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.908626 | orchestrator | 2026-01-01 01:04:03.908635 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-01 01:04:03.908645 | orchestrator | Thursday 01 January 2026 01:02:05 +0000 (0:00:00.329) 0:00:16.203 ****** 2026-01-01 01:04:03.908655 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.908665 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.908675 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.908734 | orchestrator | 2026-01-01 01:04:03.908745 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-01 01:04:03.908755 | orchestrator | Thursday 01 January 2026 01:02:06 +0000 (0:00:00.322) 0:00:16.526 ****** 2026-01-01 01:04:03.908765 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.908775 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.908784 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.908794 | orchestrator | 2026-01-01 01:04:03.908804 | 
orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-01 01:04:03.908814 | orchestrator | Thursday 01 January 2026 01:02:06 +0000 (0:00:00.554) 0:00:17.080 ****** 2026-01-01 01:04:03.908826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d1d2d931--4931--5db0--ab2b--78163b321b95-osd--block--d1d2d931--4931--5db0--ab2b--78163b321b95', 'dm-uuid-LVM-FK1QBo4OXTjtHYPrIVww6qnrCtuOsh01Ob8BpWWHOHHVW6p4e66V0K211KeJvAdQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.908839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf-osd--block--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf', 'dm-uuid-LVM-UyL3VHnPIroTg0dkBDPlejOQABxBvTgUZuLAQXT4NvTA09QOFfW1CPBYa4onRlQo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.908850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.908862 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.908872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.908890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.908900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.908942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.908955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.908965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.908978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part1', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part14', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part15', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part16', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909002 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb51feb3--1d35--54b5--8ae7--a768510de0c7-osd--block--fb51feb3--1d35--54b5--8ae7--a768510de0c7', 'dm-uuid-LVM-24DBmmmuabDFfKPEzAbcmMeHV1fRiLGa6bft9tf2dZWT3PVGce68pIphUi9ykJQQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d1d2d931--4931--5db0--ab2b--78163b321b95-osd--block--d1d2d931--4931--5db0--ab2b--78163b321b95'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d8BQ8X-O0q2-8Dr6-D6Yo-uk3T-aAHA-v24QNd', 'scsi-0QEMU_QEMU_HARDDISK_39cbb32c-70e6-49f5-af10-21eee2e163cb', 'scsi-SQEMU_QEMU_HARDDISK_39cbb32c-70e6-49f5-af10-21eee2e163cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef-osd--block--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef', 'dm-uuid-LVM-7M310GiLltp8Vnz2csjtNJlk1L0vv4Cmofoy1KrAV9BQUSRNMmV5gY2W3vALnew7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf-osd--block--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4KnCTR-rOfZ-7W2z-L9Zo-9C3G-Tf5Y-xbkafe', 'scsi-0QEMU_QEMU_HARDDISK_4d1e5713-0224-44c3-b7e5-3534e8d42515', 'scsi-SQEMU_QEMU_HARDDISK_4d1e5713-0224-44c3-b7e5-3534e8d42515'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20b388ad-01dd-4aa6-8ed7-7c845856acdf', 'scsi-SQEMU_QEMU_HARDDISK_20b388ad-01dd-4aa6-8ed7-7c845856acdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-01 01:04:03.909160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909171 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.909182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4d57ac9--6a13--5c0f--8510--006bddd85eb9-osd--block--a4d57ac9--6a13--5c0f--8510--006bddd85eb9', 'dm-uuid-LVM-hcC0V33nfWhvavdx6bYTOrPejhCRsXT6McptdcU8XwLHX58F9r79eZ6RUgC13lY9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3a96b51d--dfb4--5025--9237--a7295cf21147-osd--block--3a96b51d--dfb4--5025--9237--a7295cf21147', 'dm-uuid-LVM-rtlyMB2dCL1xBVCjxdFgUZQe34RzKl2yj8TH7muRKMAgiZ4tsixX4NUg0uUmJifx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fb51feb3--1d35--54b5--8ae7--a768510de0c7-osd--block--fb51feb3--1d35--54b5--8ae7--a768510de0c7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NteD8s-bHnx-ob7m-Nrex-AzPQ-gcuO-xNLEZR', 'scsi-0QEMU_QEMU_HARDDISK_de0fd659-da77-4142-b79c-576ab421e122', 'scsi-SQEMU_QEMU_HARDDISK_de0fd659-da77-4142-b79c-576ab421e122'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef-osd--block--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8blfqt-AMNX-egei-T1WZ-gYXs-o9ZA-RuO3Jz', 'scsi-0QEMU_QEMU_HARDDISK_76bcb63e-af93-4dd0-8322-d7e595f79a86', 'scsi-SQEMU_QEMU_HARDDISK_76bcb63e-af93-4dd0-8322-d7e595f79a86'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a98e6dc-3aee-4304-8fec-4f13a362fcb3', 'scsi-SQEMU_QEMU_HARDDISK_5a98e6dc-3aee-4304-8fec-4f13a362fcb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909562 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.909572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 01:04:03.909616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909634 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a4d57ac9--6a13--5c0f--8510--006bddd85eb9-osd--block--a4d57ac9--6a13--5c0f--8510--006bddd85eb9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hYwwY6-iOtQ-zGma-oaom-t0s8-TBDp-jDC4Fj', 'scsi-0QEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab', 'scsi-SQEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3a96b51d--dfb4--5025--9237--a7295cf21147-osd--block--3a96b51d--dfb4--5025--9237--a7295cf21147'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Nxnuu3-Xqm4-ppZR-n7Hk-J8xe-yIYD-Rl9W16', 'scsi-0QEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c', 'scsi-SQEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f', 'scsi-SQEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 01:04:03.909688 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.909698 | orchestrator | 2026-01-01 01:04:03.909708 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-01 01:04:03.909718 | orchestrator | Thursday 01 January 2026 01:02:07 +0000 (0:00:00.510) 0:00:17.591 ****** 2026-01-01 01:04:03.909730 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d1d2d931--4931--5db0--ab2b--78163b321b95-osd--block--d1d2d931--4931--5db0--ab2b--78163b321b95', 'dm-uuid-LVM-FK1QBo4OXTjtHYPrIVww6qnrCtuOsh01Ob8BpWWHOHHVW6p4e66V0K211KeJvAdQ'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf-osd--block--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf', 'dm-uuid-LVM-UyL3VHnPIroTg0dkBDPlejOQABxBvTgUZuLAQXT4NvTA09QOFfW1CPBYa4onRlQo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909821 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909858 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb51feb3--1d35--54b5--8ae7--a768510de0c7-osd--block--fb51feb3--1d35--54b5--8ae7--a768510de0c7', 'dm-uuid-LVM-24DBmmmuabDFfKPEzAbcmMeHV1fRiLGa6bft9tf2dZWT3PVGce68pIphUi9ykJQQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909881 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part1', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part14', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part15', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part16', 'scsi-SQEMU_QEMU_HARDDISK_434f30f3-358f-4fc6-b78f-2729b9d7ead8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-01 01:04:03.909899 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef-osd--block--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef', 'dm-uuid-LVM-7M310GiLltp8Vnz2csjtNJlk1L0vv4Cmofoy1KrAV9BQUSRNMmV5gY2W3vALnew7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d1d2d931--4931--5db0--ab2b--78163b321b95-osd--block--d1d2d931--4931--5db0--ab2b--78163b321b95'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d8BQ8X-O0q2-8Dr6-D6Yo-uk3T-aAHA-v24QNd', 'scsi-0QEMU_QEMU_HARDDISK_39cbb32c-70e6-49f5-af10-21eee2e163cb', 'scsi-SQEMU_QEMU_HARDDISK_39cbb32c-70e6-49f5-af10-21eee2e163cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909929 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909939 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf-osd--block--9b8df0c9--f78d--5986--b3c0--5cd01f50e7bf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4KnCTR-rOfZ-7W2z-L9Zo-9C3G-Tf5Y-xbkafe', 'scsi-0QEMU_QEMU_HARDDISK_4d1e5713-0224-44c3-b7e5-3534e8d42515', 'scsi-SQEMU_QEMU_HARDDISK_4d1e5713-0224-44c3-b7e5-3534e8d42515'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20b388ad-01dd-4aa6-8ed7-7c845856acdf', 'scsi-SQEMU_QEMU_HARDDISK_20b388ad-01dd-4aa6-8ed7-7c845856acdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909969 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909983 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.909998 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910007 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.910062 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910080 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910088 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4d57ac9--6a13--5c0f--8510--006bddd85eb9-osd--block--a4d57ac9--6a13--5c0f--8510--006bddd85eb9', 'dm-uuid-LVM-hcC0V33nfWhvavdx6bYTOrPejhCRsXT6McptdcU8XwLHX58F9r79eZ6RUgC13lY9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910105 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3a96b51d--dfb4--5025--9237--a7295cf21147-osd--block--3a96b51d--dfb4--5025--9237--a7295cf21147', 'dm-uuid-LVM-rtlyMB2dCL1xBVCjxdFgUZQe34RzKl2yj8TH7muRKMAgiZ4tsixX4NUg0uUmJifx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d0a4f68-2481-476d-ba91-058fec8c2e3c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-01 01:04:03.910148 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fb51feb3--1d35--54b5--8ae7--a768510de0c7-osd--block--fb51feb3--1d35--54b5--8ae7--a768510de0c7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NteD8s-bHnx-ob7m-Nrex-AzPQ-gcuO-xNLEZR', 'scsi-0QEMU_QEMU_HARDDISK_de0fd659-da77-4142-b79c-576ab421e122', 'scsi-SQEMU_QEMU_HARDDISK_de0fd659-da77-4142-b79c-576ab421e122'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910165 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910174 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef-osd--block--dcdeb624--ddaf--57fa--a1ad--1b7c62a928ef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8blfqt-AMNX-egei-T1WZ-gYXs-o9ZA-RuO3Jz', 'scsi-0QEMU_QEMU_HARDDISK_76bcb63e-af93-4dd0-8322-d7e595f79a86', 'scsi-SQEMU_QEMU_HARDDISK_76bcb63e-af93-4dd0-8322-d7e595f79a86'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910188 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910196 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a98e6dc-3aee-4304-8fec-4f13a362fcb3', 'scsi-SQEMU_QEMU_HARDDISK_5a98e6dc-3aee-4304-8fec-4f13a362fcb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910205 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910217 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910226 | orchestrator | skipping: 
[testbed-node-4] 2026-01-01 01:04:03.910240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910253 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910262 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910270 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910279 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910298 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7126eb4-3df9-4966-9c87-58f79188c0e4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-01 01:04:03.910314 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a4d57ac9--6a13--5c0f--8510--006bddd85eb9-osd--block--a4d57ac9--6a13--5c0f--8510--006bddd85eb9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hYwwY6-iOtQ-zGma-oaom-t0s8-TBDp-jDC4Fj', 'scsi-0QEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab', 'scsi-SQEMU_QEMU_HARDDISK_c9c8c13d-3654-44d4-a582-9918730b81ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910323 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3a96b51d--dfb4--5025--9237--a7295cf21147-osd--block--3a96b51d--dfb4--5025--9237--a7295cf21147'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Nxnuu3-Xqm4-ppZR-n7Hk-J8xe-yIYD-Rl9W16', 'scsi-0QEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c', 'scsi-SQEMU_QEMU_HARDDISK_915f467d-f7f4-452d-8987-30de089be54c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910331 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f', 'scsi-SQEMU_QEMU_HARDDISK_caf45da1-cd9a-4b0f-9c1b-eed918b4e62f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910367 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 01:04:03.910382 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.910390 | orchestrator | 2026-01-01 01:04:03.910399 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-01 01:04:03.910407 | orchestrator | Thursday 01 January 2026 01:02:07 +0000 (0:00:00.601) 0:00:18.192 ****** 2026-01-01 01:04:03.910416 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.910424 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.910524 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.910534 | orchestrator | 2026-01-01 01:04:03.910542 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-01 01:04:03.910551 | orchestrator | Thursday 01 January 2026 01:02:08 +0000 (0:00:00.748) 0:00:18.941 ****** 2026-01-01 01:04:03.910559 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.910620 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.910631 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.910639 | orchestrator | 2026-01-01 01:04:03.910647 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-01 01:04:03.910655 | orchestrator | Thursday 01 January 2026 01:02:09 +0000 (0:00:00.549) 0:00:19.491 ****** 2026-01-01 01:04:03.910663 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.910671 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.910679 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.910687 | orchestrator | 2026-01-01 01:04:03.910695 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-01 01:04:03.910703 | orchestrator | Thursday 01 January 2026 01:02:09 +0000 (0:00:00.665) 0:00:20.156 
****** 2026-01-01 01:04:03.910711 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.910815 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.910833 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.910846 | orchestrator | 2026-01-01 01:04:03.910860 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-01 01:04:03.910874 | orchestrator | Thursday 01 January 2026 01:02:09 +0000 (0:00:00.291) 0:00:20.448 ****** 2026-01-01 01:04:03.910887 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.910900 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.910913 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.910922 | orchestrator | 2026-01-01 01:04:03.910930 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-01 01:04:03.910938 | orchestrator | Thursday 01 January 2026 01:02:10 +0000 (0:00:00.541) 0:00:20.989 ****** 2026-01-01 01:04:03.910946 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.910954 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.910962 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.910970 | orchestrator | 2026-01-01 01:04:03.910978 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-01 01:04:03.910986 | orchestrator | Thursday 01 January 2026 01:02:11 +0000 (0:00:00.538) 0:00:21.528 ****** 2026-01-01 01:04:03.910994 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-01 01:04:03.911002 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-01 01:04:03.911010 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-01 01:04:03.911018 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-01 01:04:03.911026 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-01 01:04:03.911034 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-01 01:04:03.911042 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-01 01:04:03.911052 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-01 01:04:03.911077 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-01 01:04:03.911091 | orchestrator | 2026-01-01 01:04:03.911105 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-01 01:04:03.911118 | orchestrator | Thursday 01 January 2026 01:02:11 +0000 (0:00:00.852) 0:00:22.381 ****** 2026-01-01 01:04:03.911132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-01 01:04:03.911141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-01 01:04:03.911149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-01 01:04:03.911156 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.911164 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-01 01:04:03.911172 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-01 01:04:03.911180 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-01 01:04:03.911188 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.911196 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-01 01:04:03.911204 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-01 01:04:03.911212 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-01 01:04:03.911219 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.911227 | orchestrator | 2026-01-01 01:04:03.911235 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-01 01:04:03.911243 | orchestrator | Thursday 01 January 2026 01:02:12 +0000 (0:00:00.391) 0:00:22.772 ****** 2026-01-01 
01:04:03.911258 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:04:03.911266 | orchestrator | 2026-01-01 01:04:03.911274 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-01 01:04:03.911284 | orchestrator | Thursday 01 January 2026 01:02:13 +0000 (0:00:00.740) 0:00:23.512 ****** 2026-01-01 01:04:03.911300 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.911308 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.911316 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.911324 | orchestrator | 2026-01-01 01:04:03.911332 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-01 01:04:03.911340 | orchestrator | Thursday 01 January 2026 01:02:13 +0000 (0:00:00.357) 0:00:23.870 ****** 2026-01-01 01:04:03.911376 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.911385 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.911393 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.911401 | orchestrator | 2026-01-01 01:04:03.911409 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-01 01:04:03.911417 | orchestrator | Thursday 01 January 2026 01:02:13 +0000 (0:00:00.343) 0:00:24.213 ****** 2026-01-01 01:04:03.911425 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.911432 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.911440 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:04:03.911448 | orchestrator | 2026-01-01 01:04:03.911456 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-01 01:04:03.911464 | orchestrator | Thursday 01 January 2026 01:02:14 +0000 (0:00:00.342) 0:00:24.556 ****** 2026-01-01 
01:04:03.911472 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.911480 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.911488 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.911496 | orchestrator | 2026-01-01 01:04:03.911504 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-01 01:04:03.911512 | orchestrator | Thursday 01 January 2026 01:02:14 +0000 (0:00:00.692) 0:00:25.249 ****** 2026-01-01 01:04:03.911519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:04:03.911527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:04:03.911543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:04:03.911551 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.911559 | orchestrator | 2026-01-01 01:04:03.911567 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-01 01:04:03.911575 | orchestrator | Thursday 01 January 2026 01:02:15 +0000 (0:00:00.535) 0:00:25.784 ****** 2026-01-01 01:04:03.911583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:04:03.911591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:04:03.911599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:04:03.911607 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.911614 | orchestrator | 2026-01-01 01:04:03.911622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-01 01:04:03.911630 | orchestrator | Thursday 01 January 2026 01:02:15 +0000 (0:00:00.421) 0:00:26.206 ****** 2026-01-01 01:04:03.911638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 01:04:03.911646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 01:04:03.911655 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 01:04:03.911662 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.911670 | orchestrator | 2026-01-01 01:04:03.911678 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-01 01:04:03.911686 | orchestrator | Thursday 01 January 2026 01:02:16 +0000 (0:00:00.408) 0:00:26.614 ****** 2026-01-01 01:04:03.911694 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:04:03.911702 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:04:03.911710 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:04:03.911718 | orchestrator | 2026-01-01 01:04:03.911726 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-01 01:04:03.911734 | orchestrator | Thursday 01 January 2026 01:02:16 +0000 (0:00:00.350) 0:00:26.965 ****** 2026-01-01 01:04:03.911742 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-01 01:04:03.911750 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-01 01:04:03.911758 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-01 01:04:03.911766 | orchestrator | 2026-01-01 01:04:03.911774 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-01 01:04:03.911782 | orchestrator | Thursday 01 January 2026 01:02:17 +0000 (0:00:00.522) 0:00:27.487 ****** 2026-01-01 01:04:03.911790 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 01:04:03.911798 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 01:04:03.911805 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 01:04:03.911813 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-01 01:04:03.911821 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-01 01:04:03.911829 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-01 01:04:03.911837 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-01 01:04:03.911845 | orchestrator | 2026-01-01 01:04:03.911853 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-01 01:04:03.911861 | orchestrator | Thursday 01 January 2026 01:02:18 +0000 (0:00:01.069) 0:00:28.557 ****** 2026-01-01 01:04:03.911869 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 01:04:03.911877 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 01:04:03.911893 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 01:04:03.911901 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-01 01:04:03.911909 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-01 01:04:03.911922 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-01 01:04:03.911934 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-01 01:04:03.911943 | orchestrator | 2026-01-01 01:04:03.911951 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-01 01:04:03.911963 | orchestrator | Thursday 01 January 2026 01:02:20 +0000 (0:00:02.195) 0:00:30.752 ****** 2026-01-01 01:04:03.911977 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:04:03.911989 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:04:03.912002 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-01 01:04:03.912015 | orchestrator | 2026-01-01 01:04:03.912027 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-01 01:04:03.912041 | orchestrator | Thursday 01 January 2026 01:02:20 +0000 (0:00:00.396) 0:00:31.149 ****** 2026-01-01 01:04:03.912056 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-01 01:04:03.912071 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-01 01:04:03.912085 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-01 01:04:03.912094 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-01 01:04:03.912102 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-01 01:04:03.912110 | orchestrator | 2026-01-01 01:04:03.912122 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-01 01:04:03.912136 | orchestrator | Thursday 01 January 2026 01:03:06 +0000 (0:00:45.903) 0:01:17.052 ****** 2026-01-01 01:04:03.912150 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912164 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912177 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912191 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912199 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912207 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912215 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-01 01:04:03.912223 | orchestrator | 2026-01-01 01:04:03.912231 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-01 01:04:03.912239 | orchestrator | Thursday 01 January 2026 01:03:31 +0000 (0:00:25.264) 0:01:42.316 ****** 2026-01-01 01:04:03.912247 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912255 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912269 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912277 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912285 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912293 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912301 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 01:04:03.912308 | orchestrator | 2026-01-01 01:04:03.912316 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-01 01:04:03.912324 | orchestrator | Thursday 01 January 2026 01:03:43 +0000 (0:00:11.179) 0:01:53.496 ****** 2026-01-01 01:04:03.912332 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912345 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 01:04:03.912405 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 01:04:03.912413 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912421 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 01:04:03.912436 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 01:04:03.912444 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912452 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 01:04:03.912460 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 01:04:03.912468 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912476 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 01:04:03.912484 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 01:04:03.912492 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 01:04:03.912500 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-01 01:04:03.912508 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-01 01:04:03.912516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-01 01:04:03.912524 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-01 01:04:03.912532 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-01 01:04:03.912540 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-01-01 01:04:03.912548 | orchestrator |
2026-01-01 01:04:03.912556 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:04:03.912564 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-01 01:04:03.912574 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-01-01 01:04:03.912582 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-01 01:04:03.912590 | orchestrator |
2026-01-01 01:04:03.912598 | orchestrator |
2026-01-01 01:04:03.912606 | orchestrator |
2026-01-01 01:04:03.912614 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:04:03.912622 | orchestrator | Thursday 01 January 2026 01:04:00 +0000 (0:00:17.892) 0:02:11.389 ******
2026-01-01 01:04:03.912630 | orchestrator | ===============================================================================
2026-01-01 01:04:03.912645 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.90s
2026-01-01 01:04:03.912653 | orchestrator | generate keys ---------------------------------------------------------- 25.26s
2026-01-01 01:04:03.912661 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.89s
2026-01-01 01:04:03.912669 | orchestrator | get keys from monitors ------------------------------------------------- 11.18s
2026-01-01 01:04:03.912677 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.22s
2026-01-01 01:04:03.912685 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.20s
2026-01-01 01:04:03.912692 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.92s
2026-01-01 01:04:03.912700 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.07s
2026-01-01 01:04:03.912708 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.89s
2026-01-01 01:04:03.912716 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s
2026-01-01 01:04:03.912724 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.84s
2026-01-01 01:04:03.912732 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.75s
2026-01-01 01:04:03.912740 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.74s
2026-01-01 01:04:03.912748 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.69s
2026-01-01 01:04:03.912756 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.68s
2026-01-01 01:04:03.912764 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s
2026-01-01 01:04:03.912772 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s
2026-01-01 01:04:03.912781 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.66s
2026-01-01 01:04:03.912795 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s
2026-01-01 01:04:03.912807 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.60s
2026-01-01 01:04:03.912821 | orchestrator | 2026-01-01 01:04:03 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED
2026-01-01 01:04:03.912834 | orchestrator | 2026-01-01 01:04:03 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED
2026-01-01 01:04:03.912852 | orchestrator | 2026-01-01 01:04:03 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED
2026-01-01 01:04:03.912870 | orchestrator | 2026-01-01 01:04:03 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:04:06.963090 | orchestrator | 2026-01-01 01:04:06 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED
2026-01-01 01:04:06.965542 | orchestrator | 2026-01-01 01:04:06 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED
2026-01-01 01:04:06.968667 | orchestrator | 2026-01-01 01:04:06 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED
2026-01-01 01:04:06.968756 | orchestrator | 2026-01-01 01:04:06 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:04:10.014372 | orchestrator | 2026-01-01 01:04:10 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED
2026-01-01 01:04:10.018583 | orchestrator | 2026-01-01 01:04:10 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED
2026-01-01 01:04:10.018642 | orchestrator | 2026-01-01 01:04:10 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED
2026-01-01 01:04:10.018656 | orchestrator | 2026-01-01 01:04:10 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:04:13.063939 | orchestrator | 2026-01-01 01:04:13 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED
2026-01-01 01:04:13.064945 | orchestrator | 2026-01-01 01:04:13 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED
2026-01-01 01:04:13.066085 | orchestrator | 2026-01-01 01:04:13 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED
2026-01-01 01:04:13.066210 | orchestrator | 2026-01-01 01:04:13 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:04:16.111938 | orchestrator | 2026-01-01 01:04:16 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED
2026-01-01 01:04:16.112823 | orchestrator | 2026-01-01 01:04:16 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED
2026-01-01 01:04:16.115369 | orchestrator | 2026-01-01 01:04:16 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED
2026-01-01 01:04:16.115390 | orchestrator | 2026-01-01 01:04:16 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:04:19.142356 | orchestrator | 2026-01-01 01:04:19 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED
2026-01-01 01:04:19.144813 | orchestrator | 2026-01-01 01:04:19 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED
2026-01-01 01:04:19.146673 | orchestrator | 2026-01-01 01:04:19 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED
2026-01-01 01:04:19.146784 | orchestrator | 2026-01-01 01:04:19 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:04:22.203864 | orchestrator | 2026-01-01 01:04:22 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED
2026-01-01 01:04:22.204570 | orchestrator | 2026-01-01 01:04:22 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED
2026-01-01 01:04:22.207155 | orchestrator | 2026-01-01 01:04:22 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED
2026-01-01 01:04:22.207212 | orchestrator | 2026-01-01 01:04:22 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:04:25.253544 | orchestrator | 2026-01-01 01:04:25 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED
2026-01-01 01:04:25.259757 | orchestrator | 2026-01-01 01:04:25 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED
2026-01-01 01:04:25.263399 | orchestrator | 2026-01-01 01:04:25 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED
2026-01-01 01:04:25.264499 | orchestrator | 2026-01-01 01:04:25 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:04:28.315968 | orchestrator | 2026-01-01 01:04:28 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED
2026-01-01 01:04:28.317773 | orchestrator | 2026-01-01 01:04:28 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state STARTED
2026-01-01 01:04:28.320764 | orchestrator | 2026-01-01 01:04:28 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED
2026-01-01 01:04:28.320844 | orchestrator | 2026-01-01 01:04:28 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:04:31.361081 | orchestrator | 2026-01-01 01:04:31 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED
2026-01-01 01:04:31.361536 | orchestrator | 2026-01-01 01:04:31 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED
2026-01-01 01:04:31.362854 | orchestrator | 2026-01-01 01:04:31 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED
2026-01-01 01:04:31.368617 | orchestrator |
2026-01-01 01:04:31.368656 | orchestrator | 2026-01-01 01:04:31 | INFO  | Task 59209a37-1527-4b5c-bd6e-cf2b6f388896 is in state SUCCESS
2026-01-01 01:04:31.370837 | orchestrator |
2026-01-01 01:04:31.370895 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 01:04:31.370908 | orchestrator |
2026-01-01 01:04:31.370919 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 01:04:31.370956 | orchestrator | Thursday 01 January 2026 01:03:26 +0000 (0:00:00.229) 0:00:00.229 ******
2026-01-01 01:04:31.370967 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:04:31.370978 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:04:31.370988 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:04:31.370998 | orchestrator |
2026-01-01 01:04:31.371008 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 01:04:31.371018 | orchestrator | Thursday 01 January 2026 01:03:26 +0000 (0:00:00.249) 0:00:00.478 ******
2026-01-01 01:04:31.371028 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-01-01 01:04:31.371038 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-01-01 01:04:31.371048 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-01-01 01:04:31.371058 | orchestrator |
2026-01-01 01:04:31.371068 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-01-01 01:04:31.371078 | orchestrator |
2026-01-01 01:04:31.371088 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-01 01:04:31.371098 | orchestrator | Thursday 01 January 2026 01:03:26 +0000 (0:00:00.366) 0:00:00.845 ******
2026-01-01 01:04:31.371108 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:04:31.371118 | orchestrator |
2026-01-01 01:04:31.371128 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-01-01 01:04:31.371138 | orchestrator | Thursday 01 January 2026 01:03:27 +0000 (0:00:00.483) 0:00:01.328 ******
2026-01-01 01:04:31.371154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.371170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.371210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.371231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.371242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.371253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.371264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.371275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.371290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.371307 | orchestrator |
2026-01-01 01:04:31.371317 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-01-01 01:04:31.371362 | orchestrator | Thursday 01 January 2026 01:03:29 +0000 (0:00:01.887) 0:00:03.216 ******
2026-01-01 01:04:31.373172 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:04:31.373220 | orchestrator |
2026-01-01 01:04:31.373231 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-01-01 01:04:31.373242 | orchestrator | Thursday 01 January 2026 01:03:29 +0000 (0:00:00.134) 0:00:03.351 ******
2026-01-01 01:04:31.373252 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:04:31.373262 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:04:31.373272 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:04:31.373281 | orchestrator |
2026-01-01 01:04:31.373292 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-01-01 01:04:31.373301 | orchestrator | Thursday 01 January 2026 01:03:29 +0000 (0:00:00.452) 0:00:03.803 ******
2026-01-01 01:04:31.373311 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-01 01:04:31.373321 | orchestrator |
2026-01-01 01:04:31.373399 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-01 01:04:31.373412 | orchestrator | Thursday 01 January 2026 01:03:30 +0000 (0:00:00.827) 0:00:04.630 ******
2026-01-01 01:04:31.373422 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:04:31.373431 | orchestrator |
2026-01-01 01:04:31.373438 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-01-01 01:04:31.373446 | orchestrator | Thursday 01 January 2026 01:03:30 +0000 (0:00:00.519) 0:00:05.150 ******
2026-01-01 01:04:31.373457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.373469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.373540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.373553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.373562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.373570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.373580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.373589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.373602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.373611 | orchestrator |
2026-01-01 01:04:31.373622 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-01-01 01:04:31.373631 | orchestrator | Thursday 01 January 2026 01:03:34 +0000 (0:00:03.126) 0:00:08.276 ******
2026-01-01 01:04:31.373664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.373675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.373683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.373692 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:04:31.373701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.373715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.373750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.373760 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:04:31.373769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.373780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.373790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.373804 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:04:31.373814 | orchestrator |
2026-01-01 01:04:31.373823 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-01-01 01:04:31.373832 | orchestrator | Thursday 01 January 2026 01:03:34 +0000 (0:00:00.521) 0:00:08.797 ******
2026-01-01 01:04:31.373846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.373878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.373890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.373899 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:04:31.373909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.373924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.373935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.373945 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:04:31.373981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01
01:04:31.373993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 01:04:31.374003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 01:04:31.374073 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:04:31.374093 | orchestrator | 2026-01-01 01:04:31.374104 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-01 01:04:31.374112 | orchestrator | Thursday 01 January 2026 01:03:35 +0000 (0:00:00.733) 0:00:09.531 ****** 2026-01-01 01:04:31.374121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-01 01:04:31.374134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-01 01:04:31.374170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-01 01:04:31.374180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 01:04:31.374194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 01:04:31.374203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 01:04:31.374211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 01:04:31.374223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 01:04:31.374252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 01:04:31.374261 | orchestrator | 2026-01-01 01:04:31.374270 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-01 01:04:31.374278 | orchestrator | Thursday 01 January 2026 01:03:38 +0000 (0:00:03.201) 0:00:12.733 ****** 2026-01-01 01:04:31.374286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-01 01:04:31.374304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 01:04:31.374313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}}}}) 2026-01-01 01:04:31.374326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 01:04:31.374381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-01 01:04:31.374391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 01:04:31.374405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 01:04:31.374413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 01:04:31.374422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 01:04:31.374430 | orchestrator | 2026-01-01 01:04:31.374438 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-01 01:04:31.374446 | orchestrator | Thursday 01 January 2026 01:03:44 +0000 (0:00:05.801) 0:00:18.535 ****** 2026-01-01 01:04:31.374454 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:04:31.374462 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:04:31.374470 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:04:31.374478 | orchestrator | 2026-01-01 01:04:31.374486 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-01 01:04:31.374499 | orchestrator | Thursday 01 January 2026 01:03:45 +0000 (0:00:01.437) 0:00:19.972 ****** 2026-01-01 01:04:31.374507 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:04:31.374515 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:04:31.374523 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:04:31.374531 | orchestrator | 2026-01-01 01:04:31.374539 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-01 01:04:31.374568 | orchestrator | Thursday 01 January 2026 01:03:46 +0000 (0:00:00.611) 0:00:20.584 ****** 2026-01-01 01:04:31.374577 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:04:31.374585 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:04:31.374593 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:04:31.374601 | orchestrator | 2026-01-01 01:04:31.374609 | orchestrator | TASK [keystone : Copying Keystone 
Domain specific settings] ******************** 2026-01-01 01:04:31.374617 | orchestrator | Thursday 01 January 2026 01:03:46 +0000 (0:00:00.325) 0:00:20.910 ****** 2026-01-01 01:04:31.374630 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:04:31.374638 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:04:31.374646 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:04:31.374654 | orchestrator | 2026-01-01 01:04:31.374662 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-01 01:04:31.374670 | orchestrator | Thursday 01 January 2026 01:03:47 +0000 (0:00:00.565) 0:00:21.475 ****** 2026-01-01 01:04:31.374679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-01 01:04:31.374688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 01:04:31.374696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 01:04:31.374705 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:04:31.374717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-01 01:04:31.374749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 01:04:31.374764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 01:04:31.374773 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:04:31.374781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-01 01:04:31.374790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 01:04:31.374799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-01-01 01:04:31.374807 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:04:31.374815 | orchestrator | 2026-01-01 01:04:31.374823 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-01 01:04:31.374831 | orchestrator | Thursday 01 January 2026 01:03:47 +0000 (0:00:00.610) 0:00:22.086 ****** 2026-01-01 01:04:31.374839 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:04:31.374847 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:04:31.374854 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:04:31.374867 | orchestrator | 2026-01-01 01:04:31.374878 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-01 01:04:31.374886 | orchestrator | Thursday 01 January 2026 01:03:48 +0000 (0:00:00.304) 0:00:22.391 ****** 2026-01-01 01:04:31.374895 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-01 01:04:31.374924 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-01 01:04:31.374933 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-01 01:04:31.374941 | orchestrator | 2026-01-01 01:04:31.374950 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-01 01:04:31.374958 | orchestrator | Thursday 01 January 2026 01:03:50 +0000 (0:00:01.869) 0:00:24.261 ****** 2026-01-01 01:04:31.374966 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 01:04:31.374974 | orchestrator | 2026-01-01 01:04:31.374982 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-01 01:04:31.374990 | orchestrator | Thursday 01 January 2026 01:03:51 +0000 (0:00:00.935) 0:00:25.196 ****** 2026-01-01 01:04:31.374998 | orchestrator | skipping: 
[testbed-node-0]
2026-01-01 01:04:31.375006 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:04:31.375013 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:04:31.375021 | orchestrator |
2026-01-01 01:04:31.375029 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-01-01 01:04:31.375037 | orchestrator | Thursday 01 January 2026 01:03:51 +0000 (0:00:00.677) 0:00:25.874 ******
2026-01-01 01:04:31.375045 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-01 01:04:31.375053 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-01 01:04:31.375061 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-01 01:04:31.375069 | orchestrator |
2026-01-01 01:04:31.375077 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-01-01 01:04:31.375085 | orchestrator | Thursday 01 January 2026 01:03:52 +0000 (0:00:01.072) 0:00:26.946 ******
2026-01-01 01:04:31.375093 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:04:31.375101 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:04:31.375109 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:04:31.375117 | orchestrator |
2026-01-01 01:04:31.375124 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-01-01 01:04:31.375132 | orchestrator | Thursday 01 January 2026 01:03:53 +0000 (0:00:00.283) 0:00:27.230 ******
2026-01-01 01:04:31.375140 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-01 01:04:31.375148 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-01 01:04:31.375156 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-01 01:04:31.375164 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-01 01:04:31.375172 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-01 01:04:31.375180 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-01 01:04:31.375188 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-01 01:04:31.375196 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-01 01:04:31.375204 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-01 01:04:31.375212 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-01 01:04:31.375220 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-01 01:04:31.375227 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-01 01:04:31.375240 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-01 01:04:31.375249 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-01 01:04:31.375257 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-01 01:04:31.375265 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-01 01:04:31.375273 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-01 01:04:31.375281 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-01 01:04:31.375288 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-01 01:04:31.375296 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-01 01:04:31.375304 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-01 01:04:31.375312 | orchestrator |
2026-01-01 01:04:31.375320 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-01-01 01:04:31.375346 | orchestrator | Thursday 01 January 2026 01:04:02 +0000 (0:00:09.067) 0:00:36.298 ******
2026-01-01 01:04:31.375361 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-01 01:04:31.375375 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-01 01:04:31.375393 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-01 01:04:31.375406 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-01 01:04:31.375440 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-01 01:04:31.375450 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-01 01:04:31.375457 | orchestrator |
2026-01-01 01:04:31.375465 | orchestrator | TASK [service-check-containers : keystone | Check containers] ******************
2026-01-01 01:04:31.375473 | orchestrator | Thursday 01 January 2026 01:04:04 +0000 (0:00:02.866) 0:00:39.164 ******
2026-01-01 01:04:31.375482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/',
'/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.375492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.375507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.375524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.375534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.375542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.375550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.375565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.375573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.375581 | orchestrator |
2026-01-01 01:04:31.375589 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] ***
2026-01-01 01:04:31.375597 | orchestrator | Thursday 01 January 2026 01:04:07 +0000 (0:00:02.800) 0:00:41.965 ******
2026-01-01 01:04:31.375606 | orchestrator | changed: [testbed-node-0] => {
2026-01-01 01:04:31.375614 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 01:04:31.375622 | orchestrator | }
2026-01-01 01:04:31.375630 | orchestrator | changed: [testbed-node-1] => {
2026-01-01 01:04:31.375638 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 01:04:31.375646 | orchestrator | }
2026-01-01 01:04:31.375654 | orchestrator | changed: [testbed-node-2] => {
2026-01-01 01:04:31.375662 | orchestrator |  "msg": "Notifying handlers"
2026-01-01 01:04:31.375670 | orchestrator | }
2026-01-01 01:04:31.375677 | orchestrator |
2026-01-01 01:04:31.375685 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-01 01:04:31.375693 | orchestrator | Thursday 01 January 2026 01:04:08 +0000 (0:00:00.347) 0:00:42.312 ******
2026-01-01 01:04:31.375713 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.375723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.375739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.375748 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:04:31.375757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.375765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.375781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.375790 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:04:31.375799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-01 01:04:31.375813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 01:04:31.375821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 01:04:31.375830 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:04:31.375838 | orchestrator |
2026-01-01 01:04:31.375846 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-01 01:04:31.375854 | orchestrator | Thursday 01 January 2026 01:04:09 +0000 (0:00:00.916) 0:00:43.229 ******
2026-01-01 01:04:31.375862 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:04:31.375869 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:04:31.375877 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:04:31.375885 | orchestrator |
2026-01-01 01:04:31.375893 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-01-01 01:04:31.375901 | orchestrator | Thursday 01 January 2026 01:04:09 +0000 (0:00:00.286) 0:00:43.515 ******
2026-01-01 01:04:31.375909 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:04:31.375917 | orchestrator |
2026-01-01 01:04:31.375924 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-01-01 01:04:31.375932 | orchestrator | Thursday 01 January 2026 01:04:11 +0000 (0:00:02.554) 0:00:46.070 ******
2026-01-01 01:04:31.375940 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:04:31.375948 | orchestrator |
2026-01-01 01:04:31.375956 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-01-01 01:04:31.375964 | orchestrator | Thursday 01 January 2026 01:04:14 +0000 (0:00:01.163) 0:00:48.673 ******
2026-01-01 01:04:31.375972 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:04:31.375979 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:04:31.375987 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:04:31.375995 | orchestrator |
2026-01-01 01:04:31.376003 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-01-01 01:04:31.376011 | orchestrator | Thursday 01 January 2026 01:04:15 +0000 (0:00:01.163) 0:00:49.836 ******
2026-01-01 01:04:31.376018 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:04:31.376026 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:04:31.376034 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:04:31.376042 | orchestrator |
2026-01-01 01:04:31.376050 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-01-01 01:04:31.376058 | orchestrator | Thursday 01 January 2026 01:04:15 +0000 (0:00:00.303) 0:00:50.139 ******
2026-01-01 01:04:31.376069 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:04:31.376077 | orchestrator | skipping:
[testbed-node-1]
2026-01-01 01:04:31.376085 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:04:31.376093 | orchestrator |
2026-01-01 01:04:31.376101 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-01-01 01:04:31.376117 | orchestrator | Thursday 01 January 2026 01:04:16 +0000 (0:00:00.459) 0:00:50.598 ******
2026-01-01 01:04:31.376273 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "+ sudo -E kolla_set_configs\n2026-01-01 01:04:18.088 INFO Loading config file at /var/lib/kolla/config_files/config.json\n2026-01-01 01:04:18.088 INFO Validating config file\n2026-01-01 01:04:18.088 INFO Kolla config strategy set to: COPY_ALWAYS\n2026-01-01 01:04:18.094 INFO Copying service configuration files\n2026-01-01 01:04:18.094 INFO Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh\n2026-01-01 01:04:18.102 INFO Setting permission for /usr/bin/keystone-startup.sh\n2026-01-01 01:04:18.103 INFO Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf\n2026-01-01 01:04:18.103 INFO Setting permission for /etc/keystone/keystone.conf\n2026-01-01 01:04:18.104 INFO Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf\n2026-01-01 01:04:18.111 INFO Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf\n2026-01-01 01:04:18.112 INFO Creating directory /var/lib/kolla/share/ca-certificates\n2026-01-01 01:04:18.113 INFO Setting permission for /var/lib/kolla/share/ca-certificates\n2026-01-01 01:04:18.113 INFO Copying /var/lib/kolla/config_files/ca-certificates/testbed.crt to /var/lib/kolla/share/ca-certificates/testbed.crt\n2026-01-01 01:04:18.114 INFO Setting permission for /var/lib/kolla/share/ca-certificates/testbed.crt\n2026-01-01 01:04:18.114 INFO Writing out command to execute\n2026-01-01 01:04:18.114 INFO 
Setting permission for /var/log/kolla\n2026-01-01 01:04:18.115 INFO Setting permission for /etc/keystone/fernet-keys\n++ cat /run_command\n+ CMD=/usr/bin/keystone-startup.sh\n+ ARGS=\n+ sudo kolla_copy_cacerts\nrehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL\n+ sudo kolla_install_projects\n+ [[ ! -n '' ]]\n+ . kolla_extend_start\n++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone\n++ [[ ! -d /var/log/kolla/keystone ]]\n++ mkdir -p /var/log/kolla/keystone\n+++ stat -c %U:%G /var/log/kolla/keystone\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]\n++ chown keystone:kolla /var/log/kolla/keystone\n++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'\n++ touch /var/log/kolla/keystone/keystone.log\n+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]\n++ chown keystone:keystone /var/log/kolla/keystone/keystone.log\n+++ stat -c %a /var/log/kolla/keystone\n++ [[ 2755 != \\7\\5\\5 ]]\n++ chmod 755 /var/log/kolla/keystone\n++ EXTRA_KEYSTONE_MANAGE_ARGS=\n++ [[ -n '' ]]\n++ [[ -n '' ]]\n++ [[ -n 0 ]]\n++ sudo -H -u keystone keystone-manage db_sync\n2026-01-01 01:04:28.149 1081 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:397\n2026-01-01 01:04:28.157 1081 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-01 01:04:28.157 1081 ERROR keystone Traceback (most recent call last):\n2026-01-01 01:04:28.157 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-01 01:04:28.157 1081 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-01 01:04:28.157 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3298, in raw_connection\n2026-01-01 01:04:28.157 1081 ERROR keystone return self.pool.connect()\n2026-01-01 01:04:28.157 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-01 01:04:28.157 1081 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-01 01:04:28.157 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-01 01:04:28.157 1081 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-01 01:04:28.157 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-01 01:04:28.157 1081 ERROR keystone rec = pool._do_get()\n2026-01-01 01:04:28.157 1081 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-01 01:04:28.157 1081 ERROR keystone with util.safe_reraise():\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-01 01:04:28.157 1081 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-01 01:04:28.157 1081 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-01 01:04:28.157 1081 ERROR keystone return self._create_connection()\n2026-01-01 01:04:28.157 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-01 01:04:28.157 1081 ERROR keystone return _ConnectionRecord(self)\n2026-01-01 01:04:28.157 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-01 01:04:28.157 1081 ERROR keystone self.__connect()\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-01 01:04:28.157 1081 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-01 01:04:28.157 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-01 01:04:28.157 1081 ERROR keystone self(*args, **kw)\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-01 01:04:28.157 1081 ERROR keystone fn(*args, **kw)\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1916, in go\n2026-01-01 01:04:28.157 1081 ERROR keystone return once_fn(*arg, **kw)\n2026-01-01 01:04:28.157 1081 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-01 01:04:28.157 1081 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 752, in 
first_connect
    dialect.initialize(c)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py", line 2898, in initialize
    default.DefaultDialect.initialize(self, connection)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 533, in initialize
    self.default_isolation_level = self.get_default_isolation_level(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 584, in get_default_isolation_level
    return self.get_isolation_level(dbapi_conn)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py", line 2603, in get_isolation_level
    cursor.execute("SELECT @@transaction_isolation")
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py", line 153, in execute
    result = self._query(query)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py", line 322, in _query
    conn.query(q)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 563, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 825, in _read_query_result
    result.read()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 1199, in read
    first_packet = self.connection._read_packet()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 775, in _read_packet
    packet.raise_for_error()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py", line 219, in raise_for_error
    err.raise_mysql_exception(self._data)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py", line 150, in raise_mysql_exception
    raise errorclass(errno, errval)
pymysql.err.OperationalError: (1193, "Unknown system variable 'transaction_isolation'")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/var/lib/kolla/venv/bin/keystone-manage", line 7, in <module>
    sys.exit(main())
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py", line 36, in main
    cli.main(argv=sys.argv, developer_config_file=developer_config)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py", line 1727, in main
    CONF.command.cmd_class.main()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py", line 492, in main
    upgrades.offline_sync_database_to_version(CONF.command.version)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py", line 321, in offline_sync_database_to_version
    _db_sync(engine=engine)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py", line 210, in _db_sync
    with sql.session_for_write() as session:
  File "/usr/lib/python3.12/contextlib.py", line 137, in __enter__
    return next(self.gen)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1199, in _transaction_scope
    with current._produce_block(
  File "/usr/lib/python3.12/contextlib.py", line 137, in __enter__
    return next(self.gen)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 841, in _session
    self.session = self.factory._create_session(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 459, in _create_session
    self._start()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 530, in _start
    self._setup_for_connection(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 647, in _setup_for_connection
    engine = engines.create_engine(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py", line 41, in decorator
    return wrapped(*args, **kwargs)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py", line 271, in create_engine
    _test_connection(engine_event_target, max_retries, retry_interval)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py", line 169, in _test_connection
    conn = engine.connect()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 3274, in connect
    return self._connection_cls(self)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 148, in __init__
    Connection._handle_dbapi_exception_noconnection(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2436, in _handle_dbapi_exception_noconnection
    raise newraise.with_traceback(exc_info[2]) from e
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 146, in __init__
    self._dbapi_connection = engine.raw_connection()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 3298, in raw_connection
    return self.pool.connect()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 449, in connect
    return _ConnectionFairy._checkout(self)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 1263, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 712, in checkout
    rec = pool._do_get()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py", line 179, in _do_get
    with util.safe_reraise():
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py", line 177, in _do_get
    return self._create_connection()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 390, in _create_connection
    return _ConnectionRecord(self)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 674, in __init__
    self.__connect()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py", line 914, in __connect
    )._exec_w_sync_on_first_run(self.dbapi_connection, self)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py", line 483, in _exec_w_sync_on_first_run
    self(*args, **kw)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py", line 497, in __call__
    fn(*args, **kw)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 1916, in go
    return once_fn(*arg, **kw)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py", line 752, in first_connect
    dialect.initialize(c)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py", line 2898, in initialize
    default.DefaultDialect.initialize(self, connection)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 533, in initialize
    self.default_isolation_level = self.get_default_isolation_level(
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 584, in get_default_isolation_level
    return self.get_isolation_level(dbapi_conn)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py", line 2603, in get_isolation_level
    cursor.execute("SELECT @@transaction_isolation")
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py", line 153, in execute
    result = self._query(query)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py", line 322, in _query
    conn.query(q)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 563, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 825, in _read_query_result
    result.read()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 1199, in read
    first_packet = self.connection._read_packet()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py", line 775, in _read_packet
    packet.raise_for_error()
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py", line 219, in raise_for_error
    err.raise_mysql_exception(self._data)
  File "/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py", line 150, in raise_mysql_exception
    raise errorclass(errno, errval)
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, "Unknown system variable 'transaction_isolation'")
(Background on this error at: https://sqlalche.me/e/20/e3q8)

stderr_lines:
+ sudo -E kolla_set_configs
2026-01-01 01:04:18.088 INFO Loading config file at /var/lib/kolla/config_files/config.json
2026-01-01 01:04:18.088 INFO Validating config file
2026-01-01 01:04:18.088 INFO Kolla config strategy set to: COPY_ALWAYS
2026-01-01 01:04:18.094 INFO Copying service configuration files
2026-01-01 01:04:18.094 INFO Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh
2026-01-01 01:04:18.102 INFO Setting permission for /usr/bin/keystone-startup.sh
2026-01-01 01:04:18.103 INFO Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf
2026-01-01 01:04:18.103 INFO Setting permission for /etc/keystone/keystone.conf
2026-01-01 01:04:18.104 INFO Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf
2026-01-01 01:04:18.111 INFO Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf
2026-01-01 01:04:18.112 INFO Creating directory /var/lib/kolla/share/ca-certificates
2026-01-01 01:04:18.113 INFO Setting permission for /var/lib/kolla/share/ca-certificates
2026-01-01 01:04:18.113 INFO Copying /var/lib/kolla/config_files/ca-certificates/testbed.crt to /var/lib/kolla/share/ca-certificates/testbed.crt
2026-01-01 01:04:18.114 INFO Setting permission for /var/lib/kolla/share/ca-certificates/testbed.crt
2026-01-01 01:04:18.114 INFO Writing out command to execute
2026-01-01 01:04:18.114 INFO Setting permission for /var/log/kolla
2026-01-01 01:04:18.115 INFO Setting permission for /etc/keystone/fernet-keys
++ cat /run_command
+ CMD=/usr/bin/keystone-startup.sh
+ ARGS=
+ sudo kolla_copy_cacerts
rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL
+ sudo kolla_install_projects
+ [[ ! -n '' ]]
+ . kolla_extend_start
++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone
++ [[ ! -d /var/log/kolla/keystone ]]
++ mkdir -p /var/log/kolla/keystone
+++ stat -c %U:%G /var/log/kolla/keystone
++ [[ root:kolla != \k\e\y\s\t\o\n\e\:\k\o\l\l\a ]]
++ chown keystone:kolla /var/log/kolla/keystone
++ '[' '!' -f /var/log/kolla/keystone/keystone.log ']'
++ touch /var/log/kolla/keystone/keystone.log
+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log
++ [[ root:kolla != \k\e\y\s\t\o\n\e\:\k\e\y\s\t\o\n\e ]]
++ chown keystone:keystone /var/log/kolla/keystone/keystone.log
+++ stat -c %a /var/log/kolla/keystone
++ [[ 2755 != \7\5\5 ]]
++ chmod 755 /var/log/kolla/keystone
++ EXTRA_KEYSTONE_MANAGE_ARGS=
++ [[ -n '' ]]
++ [[ -n '' ]]
++ [[ -n 0 ]]
++ sudo -H -u keystone keystone-manage db_sync
2026-01-01 01:04:28.149 1081 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:397
2026-01-01 01:04:28.157 1081 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, "Unknown system variable 'transaction_isolation'")
(Background on this error at: https://sqlalche.me/e/20/e3q8)
[traceback identical to the one shown above]

stdout:
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

2026-01-01 01:04:31.376371 | orchestrator |
2026-01-01 01:04:31.376381 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:04:31.376389 | orchestrator | testbed-node-0 : ok=22  changed=12  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0
2026-01-01 01:04:31.376399 | orchestrator | testbed-node-1 : ok=18  changed=10  unreachable=0 failed=0  skipped=12  rescued=0 ignored=0
2026-01-01 01:04:31.376408 | orchestrator | testbed-node-2 : ok=18  changed=10  unreachable=0 failed=0  skipped=12  rescued=0 ignored=0
2026-01-01 01:04:31.376416 | orchestrator |
2026-01-01 01:04:31.376424 | orchestrator |
2026-01-01 01:04:31.376432 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:04:31.376440 | orchestrator | Thursday 01 January 2026 01:04:29 +0000 (0:00:12.872) 0:01:03.471 ****** 2026-01-01 01:04:31.376448 | orchestrator | =============================================================================== 2026-01-01 01:04:31.376456 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.87s 2026-01-01 01:04:31.376463 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.07s 2026-01-01 01:04:31.376471 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.80s 2026-01-01 01:04:31.376479 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.20s 2026-01-01 01:04:31.376487 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.13s 2026-01-01 01:04:31.376494 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.87s 2026-01-01 01:04:31.376502 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.80s 2026-01-01 01:04:31.376510 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.60s 2026-01-01 01:04:31.376603 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.55s 2026-01-01 01:04:31.376613 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.89s 2026-01-01 01:04:31.376621 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.87s 2026-01-01 01:04:31.376629 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.44s 2026-01-01 01:04:31.376637 | orchestrator | keystone : Checking for any running keystone_fernet containers ---------- 1.16s 2026-01-01 01:04:31.376645 | orchestrator | 
keystone : Generate the required cron jobs for the node ----------------- 1.07s 2026-01-01 01:04:31.376659 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.94s 2026-01-01 01:04:31.376667 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.92s 2026-01-01 01:04:31.376674 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.83s 2026-01-01 01:04:31.376682 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.73s 2026-01-01 01:04:31.376690 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.68s 2026-01-01 01:04:31.376698 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 0.61s 2026-01-01 01:04:31.376713 | orchestrator | 2026-01-01 01:04:31 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:04:31.376722 | orchestrator | 2026-01-01 01:04:31 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED 2026-01-01 01:04:31.376730 | orchestrator | 2026-01-01 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:34.413388 | orchestrator | 2026-01-01 01:04:34 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:04:34.413490 | orchestrator | 2026-01-01 01:04:34 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:04:34.414198 | orchestrator | 2026-01-01 01:04:34 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:04:34.417244 | orchestrator | 2026-01-01 01:04:34 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:04:34.417826 | orchestrator | 2026-01-01 01:04:34 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED 2026-01-01 01:04:34.418214 | orchestrator | 2026-01-01 01:04:34 | INFO  | Wait 1 second(s) until the next check 
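The keystone bootstrap failure recorded above is SQLAlchemy's MySQL dialect probing the server's default isolation level with `SELECT @@transaction_isolation` and getting pymysql error 1193. That variable name only exists on newer servers: MySQL introduced `transaction_isolation` in 5.7.20 and MariaDB in the 11.1 series; older servers know only the legacy `tx_isolation`. Error 1193 therefore usually points at an older MariaDB/MySQL behind the connection (or a proxy reporting a mismatched version). A minimal sketch of the version check involved — the helper name and parsing are hypothetical, not part of keystone or SQLAlchemy:

```python
# Hypothetical helper: pick the isolation-level variable name a
# MySQL/MariaDB server understands, based on its VERSION() string.
# MySQL >= 5.7.20 and MariaDB >= 11.1 know @@transaction_isolation;
# older servers only accept the legacy @@tx_isolation.

def isolation_variable(version: str) -> str:
    """Return the @@variable name for the server's isolation level.

    `version` is the string reported by SELECT VERSION(),
    e.g. "10.11.6-MariaDB" or "8.0.36".
    """
    is_mariadb = "mariadb" in version.lower()
    parts = version.split("-")[0].split(".")
    major, minor = int(parts[0]), int(parts[1])
    patch = int(parts[2]) if len(parts) > 2 else 0
    if is_mariadb:
        modern = (major, minor) >= (11, 1)
    else:
        modern = (major, minor, patch) >= (5, 7, 20)
    return "transaction_isolation" if modern else "tx_isolation"
```

Running `SELECT VERSION()` against the database VIP and comparing it with what the failing container sees is a quick way to confirm whether the dialect's assumption about the server matches reality.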
2026-01-01 01:04:37.457898 | orchestrator | 2026-01-01 01:04:37 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:04:37.458005 | orchestrator | 2026-01-01 01:04:37 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:04:37.458906 | orchestrator | 2026-01-01 01:04:37 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:04:37.460012 | orchestrator | 2026-01-01 01:04:37 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:04:37.460960 | orchestrator | 2026-01-01 01:04:37 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED 2026-01-01 01:04:37.460985 | orchestrator | 2026-01-01 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:40.520497 | orchestrator | 2026-01-01 01:04:40 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:04:40.523034 | orchestrator | 2026-01-01 01:04:40 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:04:40.526594 | orchestrator | 2026-01-01 01:04:40 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:04:40.529610 | orchestrator | 2026-01-01 01:04:40 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:04:40.531830 | orchestrator | 2026-01-01 01:04:40 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state STARTED 2026-01-01 01:04:40.532029 | orchestrator | 2026-01-01 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:43.577815 | orchestrator | 2026-01-01 01:04:43 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:04:43.581218 | orchestrator | 2026-01-01 01:04:43 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:04:43.584001 | orchestrator | 2026-01-01 01:04:43 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 
2026-01-01 01:04:43.586659 | orchestrator | 2026-01-01 01:04:43 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:04:43.588863 | orchestrator | 2026-01-01 01:04:43 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:04:43.590421 | orchestrator | 2026-01-01 01:04:43 | INFO  | Task 39bfbbad-ff12-4eac-9f5b-732fb4ca0fa6 is in state SUCCESS 2026-01-01 01:04:43.590464 | orchestrator | 2026-01-01 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:46.641691 | orchestrator | 2026-01-01 01:04:46 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:04:46.643239 | orchestrator | 2026-01-01 01:04:46 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:04:46.644244 | orchestrator | 2026-01-01 01:04:46 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:04:46.645524 | orchestrator | 2026-01-01 01:04:46 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:04:46.646536 | orchestrator | 2026-01-01 01:04:46 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:04:46.646579 | orchestrator | 2026-01-01 01:04:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:49.693257 | orchestrator | 2026-01-01 01:04:49 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:04:49.694167 | orchestrator | 2026-01-01 01:04:49 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:04:49.695187 | orchestrator | 2026-01-01 01:04:49 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:04:49.696393 | orchestrator | 2026-01-01 01:04:49 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:04:49.698854 | orchestrator | 2026-01-01 01:04:49 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 
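The repeating `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` lines are a client polling asynchronous task states until each task reaches a terminal state (the log shows transitions to SUCCESS). A minimal sketch of such a wait loop, with hypothetical names — `fetch_state` stands in for whatever lookup the orchestrator actually uses (e.g. a Celery `AsyncResult` query), and the state names assume the STARTED/SUCCESS/FAILURE set seen in the log:

```python
import time

# Sketch of a polling loop like the one producing the log lines above:
# repeatedly check a set of asynchronous tasks, print their states, and
# sleep between rounds until every task is terminal.

TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, sleep=time.sleep):
    """Poll each task until all reach a terminal state; return final states."""
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)  # hypothetical state lookup
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        pending = {t for t in pending if states[t] not in TERMINAL}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return states
```

Injecting `sleep` makes the loop testable without real delays; the production equivalent would also want a timeout so a task stuck in STARTED eventually fails the job rather than polling forever.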
2026-01-01 01:04:49.698893 | orchestrator | 2026-01-01 01:04:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:52.747062 | orchestrator | 2026-01-01 01:04:52 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:04:52.749661 | orchestrator | 2026-01-01 01:04:52 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:04:52.751879 | orchestrator | 2026-01-01 01:04:52 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:04:52.754263 | orchestrator | 2026-01-01 01:04:52 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:04:52.756008 | orchestrator | 2026-01-01 01:04:52 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:04:52.756068 | orchestrator | 2026-01-01 01:04:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:55.802384 | orchestrator | 2026-01-01 01:04:55 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:04:55.803507 | orchestrator | 2026-01-01 01:04:55 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:04:55.804986 | orchestrator | 2026-01-01 01:04:55 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:04:55.806526 | orchestrator | 2026-01-01 01:04:55 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:04:55.807976 | orchestrator | 2026-01-01 01:04:55 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:04:55.807998 | orchestrator | 2026-01-01 01:04:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:58.850912 | orchestrator | 2026-01-01 01:04:58 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:04:58.852084 | orchestrator | 2026-01-01 01:04:58 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:04:58.854901 | 
orchestrator | 2026-01-01 01:04:58 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:04:58.856821 | orchestrator | 2026-01-01 01:04:58 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:04:58.857501 | orchestrator | 2026-01-01 01:04:58 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:04:58.857537 | orchestrator | 2026-01-01 01:04:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:01.897238 | orchestrator | 2026-01-01 01:05:01 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:01.899445 | orchestrator | 2026-01-01 01:05:01 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:01.901817 | orchestrator | 2026-01-01 01:05:01 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:01.903773 | orchestrator | 2026-01-01 01:05:01 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:05:01.905513 | orchestrator | 2026-01-01 01:05:01 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:01.905538 | orchestrator | 2026-01-01 01:05:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:04.951077 | orchestrator | 2026-01-01 01:05:04 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:04.951482 | orchestrator | 2026-01-01 01:05:04 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:04.953425 | orchestrator | 2026-01-01 01:05:04 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:04.954633 | orchestrator | 2026-01-01 01:05:04 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:05:04.956180 | orchestrator | 2026-01-01 01:05:04 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:04.956213 | 
orchestrator | 2026-01-01 01:05:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:07.995192 | orchestrator | 2026-01-01 01:05:07 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:07.997836 | orchestrator | 2026-01-01 01:05:07 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:08.000656 | orchestrator | 2026-01-01 01:05:08 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:08.003266 | orchestrator | 2026-01-01 01:05:08 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:05:08.004215 | orchestrator | 2026-01-01 01:05:08 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:08.004256 | orchestrator | 2026-01-01 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:11.052610 | orchestrator | 2026-01-01 01:05:11 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:11.054748 | orchestrator | 2026-01-01 01:05:11 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:11.057241 | orchestrator | 2026-01-01 01:05:11 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:11.060511 | orchestrator | 2026-01-01 01:05:11 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state STARTED 2026-01-01 01:05:11.063095 | orchestrator | 2026-01-01 01:05:11 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:11.063126 | orchestrator | 2026-01-01 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:14.107899 | orchestrator | 2026-01-01 01:05:14 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:14.109381 | orchestrator | 2026-01-01 01:05:14 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:14.111118 | orchestrator | 2026-01-01 
01:05:14 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:14.113256 | orchestrator | 2026-01-01 01:05:14 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:14.115784 | orchestrator | 2026-01-01 01:05:14 | INFO  | Task 97ab5754-938b-4217-b65a-e76670d5914a is in state SUCCESS 2026-01-01 01:05:14.117954 | orchestrator | 2026-01-01 01:05:14.117991 | orchestrator | 2026-01-01 01:05:14.118004 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-01 01:05:14.118065 | orchestrator | 2026-01-01 01:05:14.118078 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-01 01:05:14.118193 | orchestrator | Thursday 01 January 2026 01:04:05 +0000 (0:00:00.200) 0:00:00.200 ****** 2026-01-01 01:05:14.118211 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-01 01:05:14.118275 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.118290 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.118351 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-01 01:05:14.118364 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.118375 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-01 01:05:14.118386 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-01 01:05:14.118398 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-01 01:05:14.118409 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-01 01:05:14.118420 | orchestrator | 2026-01-01 01:05:14.118431 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-01 01:05:14.118442 | orchestrator | Thursday 01 January 2026 01:04:10 +0000 (0:00:04.849) 0:00:05.050 ****** 2026-01-01 01:05:14.118453 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-01 01:05:14.118464 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.118934 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.118952 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-01 01:05:14.118963 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.118975 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-01 01:05:14.118986 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-01 01:05:14.118997 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-01 01:05:14.119008 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-01 01:05:14.119042 | orchestrator | 2026-01-01 01:05:14.119069 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-01 01:05:14.119080 | orchestrator | Thursday 01 January 2026 01:04:15 +0000 (0:00:04.520) 0:00:09.570 ****** 2026-01-01 01:05:14.119092 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-01 01:05:14.119103 
| orchestrator | 2026-01-01 01:05:14.119114 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-01 01:05:14.119126 | orchestrator | Thursday 01 January 2026 01:04:15 +0000 (0:00:00.908) 0:00:10.479 ****** 2026-01-01 01:05:14.119137 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-01 01:05:14.119148 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.119159 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.119170 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-01 01:05:14.119181 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.119192 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-01 01:05:14.119202 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-01 01:05:14.119213 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-01 01:05:14.119225 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-01 01:05:14.119236 | orchestrator | 2026-01-01 01:05:14.119247 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-01 01:05:14.119258 | orchestrator | Thursday 01 January 2026 01:04:29 +0000 (0:00:13.788) 0:00:24.267 ****** 2026-01-01 01:05:14.119269 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-01 01:05:14.119280 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-01 01:05:14.119291 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-01 01:05:14.119330 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-01 01:05:14.119354 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-01 01:05:14.119366 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-01 01:05:14.119377 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-01 01:05:14.119409 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-01 01:05:14.119421 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-01 01:05:14.119431 | orchestrator | 2026-01-01 01:05:14.119443 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-01 01:05:14.119454 | orchestrator | Thursday 01 January 2026 01:04:32 +0000 (0:00:03.204) 0:00:27.471 ****** 2026-01-01 01:05:14.119466 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-01 01:05:14.119477 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.119488 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.119499 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-01 01:05:14.119510 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-01 01:05:14.119521 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-01 01:05:14.119534 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-01 01:05:14.119556 | orchestrator | 
changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-01 01:05:14.119569 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-01 01:05:14.119582 | orchestrator | 2026-01-01 01:05:14.119595 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:05:14.119609 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:05:14.119623 | orchestrator | 2026-01-01 01:05:14.119637 | orchestrator | 2026-01-01 01:05:14.119651 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:05:14.119664 | orchestrator | Thursday 01 January 2026 01:04:40 +0000 (0:00:07.346) 0:00:34.818 ****** 2026-01-01 01:05:14.119677 | orchestrator | =============================================================================== 2026-01-01 01:05:14.119694 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.79s 2026-01-01 01:05:14.119714 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.35s 2026-01-01 01:05:14.119733 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.85s 2026-01-01 01:05:14.119756 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.52s 2026-01-01 01:05:14.119784 | orchestrator | Check if target directories exist --------------------------------------- 3.20s 2026-01-01 01:05:14.119803 | orchestrator | Create share directory -------------------------------------------------- 0.91s 2026-01-01 01:05:14.119821 | orchestrator | 2026-01-01 01:05:14.119838 | orchestrator | 2026-01-01 01:05:14.119856 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:05:14.119873 | orchestrator | 2026-01-01 01:05:14.119892 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2026-01-01 01:05:14.119909 | orchestrator | Thursday 01 January 2026 01:03:26 +0000 (0:00:00.208) 0:00:00.208 ****** 2026-01-01 01:05:14.119926 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.119944 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.119961 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.119978 | orchestrator | 2026-01-01 01:05:14.119994 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:05:14.120013 | orchestrator | Thursday 01 January 2026 01:03:26 +0000 (0:00:00.215) 0:00:00.424 ****** 2026-01-01 01:05:14.120032 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-01 01:05:14.120050 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-01 01:05:14.120069 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-01 01:05:14.120086 | orchestrator | 2026-01-01 01:05:14.120104 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-01 01:05:14.120116 | orchestrator | 2026-01-01 01:05:14.120127 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-01 01:05:14.120138 | orchestrator | Thursday 01 January 2026 01:03:26 +0000 (0:00:00.355) 0:00:00.780 ****** 2026-01-01 01:05:14.120149 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:05:14.120159 | orchestrator | 2026-01-01 01:05:14.120170 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-01 01:05:14.120181 | orchestrator | Thursday 01 January 2026 01:03:27 +0000 (0:00:00.418) 0:00:01.199 ****** 2026-01-01 01:05:14.120343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:05:14.120392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:05:14.120419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:05:14.120441 | orchestrator | 2026-01-01 01:05:14.120453 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-01 01:05:14.120464 | orchestrator | Thursday 01 January 2026 01:03:28 +0000 (0:00:01.089) 0:00:02.288 ****** 2026-01-01 01:05:14.120475 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.120486 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.120497 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.120509 | orchestrator | 2026-01-01 01:05:14.120520 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-01 01:05:14.120535 | orchestrator | Thursday 01 January 2026 01:03:28 +0000 (0:00:00.395) 0:00:02.684 ****** 2026-01-01 01:05:14.120547 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-01 01:05:14.120558 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-01 01:05:14.120569 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-01 01:05:14.120580 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-01 01:05:14.120591 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-01 01:05:14.120602 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-01 01:05:14.120612 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-01 01:05:14.120623 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-01 01:05:14.120634 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-01 01:05:14.120645 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-01 01:05:14.120657 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-01 01:05:14.120667 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-01 01:05:14.120678 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-01 01:05:14.120696 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-01 01:05:14.120707 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-01 01:05:14.120718 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-01 01:05:14.120729 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-01 01:05:14.120740 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-01 01:05:14.120751 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-01 01:05:14.120762 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-01 01:05:14.120779 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-01 01:05:14.120791 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-01 
01:05:14.120802 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-01 01:05:14.120812 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-01 01:05:14.120824 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-01 01:05:14.120837 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-01 01:05:14.120849 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-01 01:05:14.120860 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-01 01:05:14.120871 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-01 01:05:14.120882 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-01 01:05:14.120893 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-01 01:05:14.120904 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-01 01:05:14.120915 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': 
True}) 2026-01-01 01:05:14.120926 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-01 01:05:14.120937 | orchestrator | 2026-01-01 01:05:14.120948 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:05:14.120960 | orchestrator | Thursday 01 January 2026 01:03:29 +0000 (0:00:00.760) 0:00:03.444 ****** 2026-01-01 01:05:14.120971 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.120981 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.120992 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.121003 | orchestrator | 2026-01-01 01:05:14.121019 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:05:14.121030 | orchestrator | Thursday 01 January 2026 01:03:29 +0000 (0:00:00.285) 0:00:03.730 ****** 2026-01-01 01:05:14.121041 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.121052 | orchestrator | 2026-01-01 01:05:14.121063 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:05:14.121085 | orchestrator | Thursday 01 January 2026 01:03:29 +0000 (0:00:00.118) 0:00:03.848 ****** 2026-01-01 01:05:14.121096 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.121107 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.121118 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.121129 | orchestrator | 2026-01-01 01:05:14.121140 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:05:14.121151 | orchestrator | Thursday 01 January 2026 01:03:30 +0000 (0:00:00.410) 0:00:04.259 ****** 2026-01-01 01:05:14.121162 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.121173 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.121184 | orchestrator 
| ok: [testbed-node-2] 2026-01-01 01:05:14.121195 | orchestrator | 2026-01-01 01:05:14.121206 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:05:14.121217 | orchestrator | Thursday 01 January 2026 01:03:30 +0000 (0:00:00.279) 0:00:04.538 ****** 2026-01-01 01:05:14.121228 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.121239 | orchestrator | 2026-01-01 01:05:14.121250 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:05:14.121261 | orchestrator | Thursday 01 January 2026 01:03:30 +0000 (0:00:00.123) 0:00:04.661 ****** 2026-01-01 01:05:14.121271 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.121282 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.121293 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.121445 | orchestrator | 2026-01-01 01:05:14.121463 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:05:14.121475 | orchestrator | Thursday 01 January 2026 01:03:30 +0000 (0:00:00.265) 0:00:04.926 ****** 2026-01-01 01:05:14.121486 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.121496 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.121507 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.121518 | orchestrator | 2026-01-01 01:05:14.121529 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:05:14.121540 | orchestrator | Thursday 01 January 2026 01:03:31 +0000 (0:00:00.277) 0:00:05.204 ****** 2026-01-01 01:05:14.121551 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.121562 | orchestrator | 2026-01-01 01:05:14.121572 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:05:14.121583 | orchestrator | Thursday 01 January 2026 01:03:31 +0000 (0:00:00.274) 0:00:05.478 
****** 2026-01-01 01:05:14.121606 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.121618 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.121629 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.121640 | orchestrator | 2026-01-01 01:05:14.121651 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:05:14.121662 | orchestrator | Thursday 01 January 2026 01:03:31 +0000 (0:00:00.276) 0:00:05.755 ****** 2026-01-01 01:05:14.121672 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.121683 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.121694 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.121705 | orchestrator | 2026-01-01 01:05:14.121716 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:05:14.121727 | orchestrator | Thursday 01 January 2026 01:03:31 +0000 (0:00:00.276) 0:00:06.032 ****** 2026-01-01 01:05:14.121738 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.121765 | orchestrator | 2026-01-01 01:05:14.121787 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:05:14.121798 | orchestrator | Thursday 01 January 2026 01:03:32 +0000 (0:00:00.116) 0:00:06.148 ****** 2026-01-01 01:05:14.121809 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.121820 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.121831 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.121842 | orchestrator | 2026-01-01 01:05:14.121853 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:05:14.121876 | orchestrator | Thursday 01 January 2026 01:03:32 +0000 (0:00:00.263) 0:00:06.411 ****** 2026-01-01 01:05:14.121886 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.121898 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.121909 
| orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.121919 | orchestrator | 2026-01-01 01:05:14.121930 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:05:14.121941 | orchestrator | Thursday 01 January 2026 01:03:32 +0000 (0:00:00.409) 0:00:06.820 ****** 2026-01-01 01:05:14.121952 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.121963 | orchestrator | 2026-01-01 01:05:14.121974 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:05:14.121985 | orchestrator | Thursday 01 January 2026 01:03:32 +0000 (0:00:00.117) 0:00:06.938 ****** 2026-01-01 01:05:14.121996 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.122006 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.122083 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.122099 | orchestrator | 2026-01-01 01:05:14.122110 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:05:14.122121 | orchestrator | Thursday 01 January 2026 01:03:33 +0000 (0:00:00.278) 0:00:07.217 ****** 2026-01-01 01:05:14.122132 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.122143 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.122153 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.122164 | orchestrator | 2026-01-01 01:05:14.122175 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:05:14.122186 | orchestrator | Thursday 01 January 2026 01:03:33 +0000 (0:00:00.301) 0:00:07.518 ****** 2026-01-01 01:05:14.122197 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.122208 | orchestrator | 2026-01-01 01:05:14.122219 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:05:14.122230 | orchestrator | Thursday 01 January 2026 01:03:33 +0000 (0:00:00.121) 
0:00:07.639 ****** 2026-01-01 01:05:14.122241 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.122252 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.122269 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.122281 | orchestrator | 2026-01-01 01:05:14.122292 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:05:14.122334 | orchestrator | Thursday 01 January 2026 01:03:33 +0000 (0:00:00.259) 0:00:07.899 ****** 2026-01-01 01:05:14.122345 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.122356 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.122367 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.122378 | orchestrator | 2026-01-01 01:05:14.122389 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:05:14.122400 | orchestrator | Thursday 01 January 2026 01:03:34 +0000 (0:00:00.475) 0:00:08.374 ****** 2026-01-01 01:05:14.122411 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.122422 | orchestrator | 2026-01-01 01:05:14.122433 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:05:14.122444 | orchestrator | Thursday 01 January 2026 01:03:34 +0000 (0:00:00.118) 0:00:08.492 ****** 2026-01-01 01:05:14.122455 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.122466 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.122477 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.122488 | orchestrator | 2026-01-01 01:05:14.122499 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:05:14.122510 | orchestrator | Thursday 01 January 2026 01:03:34 +0000 (0:00:00.308) 0:00:08.801 ****** 2026-01-01 01:05:14.122521 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.122532 | orchestrator | ok: [testbed-node-1] 2026-01-01 
01:05:14.122543 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.122554 | orchestrator | 2026-01-01 01:05:14.122565 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:05:14.122577 | orchestrator | Thursday 01 January 2026 01:03:34 +0000 (0:00:00.268) 0:00:09.069 ****** 2026-01-01 01:05:14.122595 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.122606 | orchestrator | 2026-01-01 01:05:14.122617 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:05:14.122628 | orchestrator | Thursday 01 January 2026 01:03:35 +0000 (0:00:00.120) 0:00:09.190 ****** 2026-01-01 01:05:14.122639 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.122650 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.122661 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.122672 | orchestrator | 2026-01-01 01:05:14.122683 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:05:14.122694 | orchestrator | Thursday 01 January 2026 01:03:35 +0000 (0:00:00.434) 0:00:09.625 ****** 2026-01-01 01:05:14.122705 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.122716 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.122727 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.122738 | orchestrator | 2026-01-01 01:05:14.122757 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:05:14.122768 | orchestrator | Thursday 01 January 2026 01:03:35 +0000 (0:00:00.404) 0:00:10.029 ****** 2026-01-01 01:05:14.122779 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.122790 | orchestrator | 2026-01-01 01:05:14.122801 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:05:14.122812 | orchestrator | Thursday 01 January 2026 01:03:36 
+0000 (0:00:00.121) 0:00:10.151 ****** 2026-01-01 01:05:14.122823 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.122833 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.122845 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.122856 | orchestrator | 2026-01-01 01:05:14.122867 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:05:14.122878 | orchestrator | Thursday 01 January 2026 01:03:36 +0000 (0:00:00.360) 0:00:10.512 ****** 2026-01-01 01:05:14.122889 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:14.122900 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:14.122911 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:14.122922 | orchestrator | 2026-01-01 01:05:14.122933 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:05:14.122944 | orchestrator | Thursday 01 January 2026 01:03:36 +0000 (0:00:00.291) 0:00:10.803 ****** 2026-01-01 01:05:14.122955 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.122966 | orchestrator | 2026-01-01 01:05:14.122977 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:05:14.122988 | orchestrator | Thursday 01 January 2026 01:03:36 +0000 (0:00:00.136) 0:00:10.940 ****** 2026-01-01 01:05:14.122998 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.123010 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.123021 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.123031 | orchestrator | 2026-01-01 01:05:14.123042 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-01 01:05:14.123053 | orchestrator | Thursday 01 January 2026 01:03:37 +0000 (0:00:00.430) 0:00:11.371 ****** 2026-01-01 01:05:14.123064 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:05:14.123075 | orchestrator | changed: 
[testbed-node-0] 2026-01-01 01:05:14.123086 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:05:14.123097 | orchestrator | 2026-01-01 01:05:14.123108 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-01 01:05:14.123119 | orchestrator | Thursday 01 January 2026 01:03:38 +0000 (0:00:01.577) 0:00:12.948 ****** 2026-01-01 01:05:14.123130 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-01 01:05:14.123141 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-01 01:05:14.123152 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-01 01:05:14.123169 | orchestrator | 2026-01-01 01:05:14.123180 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-01 01:05:14.123191 | orchestrator | Thursday 01 January 2026 01:03:40 +0000 (0:00:02.007) 0:00:14.955 ****** 2026-01-01 01:05:14.123202 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-01 01:05:14.123214 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-01 01:05:14.123230 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-01 01:05:14.123241 | orchestrator | 2026-01-01 01:05:14.123252 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-01-01 01:05:14.123263 | orchestrator | Thursday 01 January 2026 01:03:43 +0000 (0:00:02.249) 0:00:17.205 ****** 2026-01-01 01:05:14.123274 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-01 01:05:14.123285 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-01 01:05:14.123314 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-01 01:05:14.123327 | orchestrator | 2026-01-01 01:05:14.123337 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-01-01 01:05:14.123348 | orchestrator | Thursday 01 January 2026 01:03:45 +0000 (0:00:02.167) 0:00:19.373 ****** 2026-01-01 01:05:14.123359 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.123370 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.123381 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.123392 | orchestrator | 2026-01-01 01:05:14.123403 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-01 01:05:14.123414 | orchestrator | Thursday 01 January 2026 01:03:45 +0000 (0:00:00.361) 0:00:19.735 ****** 2026-01-01 01:05:14.123425 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.123436 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.123447 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.123457 | orchestrator | 2026-01-01 01:05:14.123469 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-01 01:05:14.123480 | orchestrator | Thursday 01 January 2026 01:03:45 +0000 (0:00:00.301) 0:00:20.036 ****** 2026-01-01 01:05:14.123491 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:05:14.123502 | orchestrator | 2026-01-01 01:05:14.123513 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-01 01:05:14.123524 | orchestrator | Thursday 01 January 2026 01:03:46 +0000 (0:00:00.811) 0:00:20.848 ****** 2026-01-01 01:05:14.123549 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:05:14.123588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:05:14.123603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:05:14.123621 | orchestrator | 2026-01-01 01:05:14.123637 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-01 01:05:14.123649 | orchestrator | Thursday 01 January 2026 01:03:48 +0000 (0:00:01.508) 0:00:22.357 ****** 2026-01-01 01:05:14.123669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:05:14.123682 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.123699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:05:14.123718 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.123738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:05:14.123756 | orchestrator | skipping: [testbed-node-2] 2026-01-01 
01:05:14.123767 | orchestrator | 2026-01-01 01:05:14.123778 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-01 01:05:14.123789 | orchestrator | Thursday 01 January 2026 01:03:49 +0000 (0:00:00.947) 0:00:23.304 ****** 2026-01-01 01:05:14.123807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:05:14.123819 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.123840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:05:14.123858 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.123875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:05:14.123888 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.123898 | orchestrator | 2026-01-01 01:05:14.123909 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-01-01 01:05:14.123921 | orchestrator | Thursday 01 January 2026 01:03:50 +0000 (0:00:00.929) 0:00:24.234 ****** 2026-01-01 01:05:14.123942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:05:14.123975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:05:14.124000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:05:14.124013 | orchestrator | 2026-01-01 01:05:14.124025 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-01-01 01:05:14.124036 | orchestrator | Thursday 01 January 2026 01:03:51 +0000 (0:00:01.562) 0:00:25.797 ****** 2026-01-01 01:05:14.124047 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 01:05:14.124058 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:05:14.124069 | orchestrator | } 2026-01-01 01:05:14.124080 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 01:05:14.124091 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:05:14.124101 | orchestrator | } 2026-01-01 01:05:14.124112 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 01:05:14.124123 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:05:14.124134 | orchestrator | } 2026-01-01 01:05:14.124145 | orchestrator | 2026-01-01 01:05:14.124156 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 01:05:14.124167 | orchestrator | Thursday 01 January 2026 01:03:51 +0000 (0:00:00.296) 0:00:26.094 ****** 2026-01-01 01:05:14.124187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:05:14.124207 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.124225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:05:14.124237 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.124257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option 
httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:05:14.124280 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.124291 | orchestrator | 2026-01-01 01:05:14.124323 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-01 01:05:14.124334 | orchestrator | Thursday 01 January 2026 01:03:52 +0000 (0:00:01.016) 0:00:27.110 ****** 2026-01-01 01:05:14.124345 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:14.124356 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:14.124368 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:14.124379 | orchestrator | 2026-01-01 01:05:14.124390 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-01 01:05:14.124401 | orchestrator | Thursday 01 January 2026 01:03:53 +0000 (0:00:00.425) 0:00:27.536 ****** 2026-01-01 01:05:14.124412 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:05:14.124423 | orchestrator | 2026-01-01 01:05:14.124434 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-01 01:05:14.124449 | orchestrator | Thursday 01 January 2026 01:03:53 +0000 (0:00:00.484) 0:00:28.021 ****** 2026-01-01 01:05:14.124460 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:05:14.124472 | orchestrator | 2026-01-01 01:05:14.124483 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-01 01:05:14.124494 | orchestrator | Thursday 01 January 2026 01:03:56 
+0000 (0:00:02.351) 0:00:30.373 ****** 2026-01-01 01:05:14.124505 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:05:14.124516 | orchestrator | 2026-01-01 01:05:14.124527 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-01 01:05:14.124538 | orchestrator | Thursday 01 January 2026 01:03:58 +0000 (0:00:02.333) 0:00:32.706 ****** 2026-01-01 01:05:14.124549 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:05:14.124560 | orchestrator | 2026-01-01 01:05:14.124571 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-01 01:05:14.124582 | orchestrator | Thursday 01 January 2026 01:04:16 +0000 (0:00:17.567) 0:00:50.273 ****** 2026-01-01 01:05:14.124600 | orchestrator | 2026-01-01 01:05:14.124611 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-01 01:05:14.124622 | orchestrator | Thursday 01 January 2026 01:04:16 +0000 (0:00:00.064) 0:00:50.338 ****** 2026-01-01 01:05:14.124633 | orchestrator | 2026-01-01 01:05:14.124644 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-01 01:05:14.124655 | orchestrator | Thursday 01 January 2026 01:04:16 +0000 (0:00:00.194) 0:00:50.533 ****** 2026-01-01 01:05:14.124666 | orchestrator | 2026-01-01 01:05:14.124677 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-01 01:05:14.124688 | orchestrator | Thursday 01 January 2026 01:04:16 +0000 (0:00:00.060) 0:00:50.593 ****** 2026-01-01 01:05:14.124699 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:05:14.124710 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:05:14.124721 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:05:14.124732 | orchestrator | 2026-01-01 01:05:14.124743 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 
01:05:14.124755 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-01-01 01:05:14.124766 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-01-01 01:05:14.124783 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-01-01 01:05:14.124795 | orchestrator | 2026-01-01 01:05:14.124805 | orchestrator | 2026-01-01 01:05:14.124816 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:05:14.124827 | orchestrator | Thursday 01 January 2026 01:05:10 +0000 (0:00:54.325) 0:01:44.918 ****** 2026-01-01 01:05:14.124838 | orchestrator | =============================================================================== 2026-01-01 01:05:14.124849 | orchestrator | horizon : Restart horizon container ------------------------------------ 54.33s 2026-01-01 01:05:14.124860 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.57s 2026-01-01 01:05:14.124871 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.35s 2026-01-01 01:05:14.124882 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.33s 2026-01-01 01:05:14.124893 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.25s 2026-01-01 01:05:14.124904 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.17s 2026-01-01 01:05:14.124915 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.01s 2026-01-01 01:05:14.124926 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.58s 2026-01-01 01:05:14.124937 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.56s 2026-01-01 01:05:14.124948 | 
orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.51s 2026-01-01 01:05:14.124958 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.09s 2026-01-01 01:05:14.124969 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.02s 2026-01-01 01:05:14.124980 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.95s 2026-01-01 01:05:14.124991 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.93s 2026-01-01 01:05:14.125002 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-01-01 01:05:14.125013 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s 2026-01-01 01:05:14.125024 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.48s 2026-01-01 01:05:14.125035 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s 2026-01-01 01:05:14.125046 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.43s 2026-01-01 01:05:14.125064 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.43s 2026-01-01 01:05:14.125075 | orchestrator | 2026-01-01 01:05:14 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:14.125086 | orchestrator | 2026-01-01 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:17.175594 | orchestrator | 2026-01-01 01:05:17 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:17.176972 | orchestrator | 2026-01-01 01:05:17 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:17.179960 | orchestrator | 2026-01-01 01:05:17 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 
2026-01-01 01:05:17.183087 | orchestrator | 2026-01-01 01:05:17 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:17.185847 | orchestrator | 2026-01-01 01:05:17 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:17.186248 | orchestrator | 2026-01-01 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:20.235666 | orchestrator | 2026-01-01 01:05:20 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:20.237492 | orchestrator | 2026-01-01 01:05:20 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:20.240487 | orchestrator | 2026-01-01 01:05:20 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:20.243082 | orchestrator | 2026-01-01 01:05:20 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:20.245321 | orchestrator | 2026-01-01 01:05:20 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:20.245791 | orchestrator | 2026-01-01 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:23.291480 | orchestrator | 2026-01-01 01:05:23 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:23.293518 | orchestrator | 2026-01-01 01:05:23 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:23.297000 | orchestrator | 2026-01-01 01:05:23 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:23.297460 | orchestrator | 2026-01-01 01:05:23 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:23.298598 | orchestrator | 2026-01-01 01:05:23 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:23.298950 | orchestrator | 2026-01-01 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:26.349741 | 
orchestrator | 2026-01-01 01:05:26 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:26.352397 | orchestrator | 2026-01-01 01:05:26 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:26.355681 | orchestrator | 2026-01-01 01:05:26 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:26.357375 | orchestrator | 2026-01-01 01:05:26 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:26.359256 | orchestrator | 2026-01-01 01:05:26 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:26.359548 | orchestrator | 2026-01-01 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:29.406971 | orchestrator | 2026-01-01 01:05:29 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:29.410391 | orchestrator | 2026-01-01 01:05:29 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:29.411278 | orchestrator | 2026-01-01 01:05:29 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:29.412178 | orchestrator | 2026-01-01 01:05:29 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:29.415633 | orchestrator | 2026-01-01 01:05:29 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:29.415705 | orchestrator | 2026-01-01 01:05:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:32.458529 | orchestrator | 2026-01-01 01:05:32 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:32.459107 | orchestrator | 2026-01-01 01:05:32 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:32.460344 | orchestrator | 2026-01-01 01:05:32 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:32.461248 | 
orchestrator | 2026-01-01 01:05:32 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:32.463183 | orchestrator | 2026-01-01 01:05:32 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:32.463226 | orchestrator | 2026-01-01 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:35.521245 | orchestrator | 2026-01-01 01:05:35 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:35.524699 | orchestrator | 2026-01-01 01:05:35 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:35.526453 | orchestrator | 2026-01-01 01:05:35 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:35.528345 | orchestrator | 2026-01-01 01:05:35 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:35.529974 | orchestrator | 2026-01-01 01:05:35 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:35.530209 | orchestrator | 2026-01-01 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:38.579996 | orchestrator | 2026-01-01 01:05:38 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:38.582159 | orchestrator | 2026-01-01 01:05:38 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:38.585645 | orchestrator | 2026-01-01 01:05:38 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:38.589316 | orchestrator | 2026-01-01 01:05:38 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:38.592444 | orchestrator | 2026-01-01 01:05:38 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:38.592483 | orchestrator | 2026-01-01 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:41.633764 | orchestrator | 2026-01-01 
01:05:41 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:41.636477 | orchestrator | 2026-01-01 01:05:41 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state STARTED 2026-01-01 01:05:41.639416 | orchestrator | 2026-01-01 01:05:41 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:41.641945 | orchestrator | 2026-01-01 01:05:41 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state STARTED 2026-01-01 01:05:41.643709 | orchestrator | 2026-01-01 01:05:41 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state STARTED 2026-01-01 01:05:41.644176 | orchestrator | 2026-01-01 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:44.689623 | orchestrator | 2026-01-01 01:05:44 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:05:44.691556 | orchestrator | 2026-01-01 01:05:44 | INFO  | Task ea282a8d-d75f-4c6f-b042-950e4c44d536 is in state SUCCESS 2026-01-01 01:05:44.691874 | orchestrator | 2026-01-01 01:05:44.691902 | orchestrator | 2026-01-01 01:05:44.691915 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:05:44.691927 | orchestrator | 2026-01-01 01:05:44.691939 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:05:44.691950 | orchestrator | Thursday 01 January 2026 01:04:34 +0000 (0:00:00.352) 0:00:00.352 ****** 2026-01-01 01:05:44.691961 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:44.691974 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:44.691985 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:44.691996 | orchestrator | 2026-01-01 01:05:44.692007 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:05:44.692019 | orchestrator | Thursday 01 January 2026 01:04:35 +0000 (0:00:00.812) 0:00:01.165 ****** 2026-01-01 
01:05:44.692030 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-01 01:05:44.692042 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-01 01:05:44.692053 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-01 01:05:44.692064 | orchestrator | 2026-01-01 01:05:44.692075 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-01 01:05:44.692086 | orchestrator | 2026-01-01 01:05:44.692097 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-01 01:05:44.692109 | orchestrator | Thursday 01 January 2026 01:04:36 +0000 (0:00:00.597) 0:00:01.762 ****** 2026-01-01 01:05:44.692121 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:05:44.692133 | orchestrator | 2026-01-01 01:05:44.692144 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-01-01 01:05:44.692155 | orchestrator | Thursday 01 January 2026 01:04:36 +0000 (0:00:00.710) 0:00:02.473 ****** 2026-01-01 01:05:44.692166 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left). 2026-01-01 01:05:44.692177 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left). 2026-01-01 01:05:44.692189 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left). 2026-01-01 01:05:44.692200 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left). 2026-01-01 01:05:44.692211 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left). 
2026-01-01 01:05:44.692330 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: 
Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767229541.5709524-3389-233516054180490/AnsiballZ_catalog_service.py\", line 107, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767229541.5709524-3389-233516054180490/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767229541.5709524-3389-233516054180490/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"<frozen runpy>\", line 226, in run_module\n File \"<frozen runpy>\", line 98, in _run_module_code\n File \"<frozen runpy>\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_if9d7a50/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in <module>\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_if9d7a50/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_if9d7a50/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_if9d7a50/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_if9d7a50/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n 
File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. 
Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-01 01:05:44.692380 | orchestrator | 2026-01-01 01:05:44.692400 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:05:44.692412 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-01 01:05:44.692424 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:05:44.692437 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:05:44.692448 | orchestrator | 2026-01-01 01:05:44.692459 | orchestrator | 2026-01-01 01:05:44.692470 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:05:44.692482 | orchestrator | Thursday 01 January 2026 01:05:43 +0000 (0:01:06.172) 0:01:08.645 ****** 2026-01-01 01:05:44.692495 | orchestrator | =============================================================================== 2026-01-01 01:05:44.692508 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 66.17s 2026-01-01 01:05:44.692521 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.81s 2026-01-01 01:05:44.692534 | orchestrator | designate : include_tasks ----------------------------------------------- 0.71s 2026-01-01 01:05:44.692547 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2026-01-01 01:05:44.693541 | orchestrator | 2026-01-01 01:05:44 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED 2026-01-01 01:05:44.697368 | orchestrator | 2026-01-01 01:05:44 | INFO  | Task bcb8c9cb-d8ec-42aa-9f9e-b00f2fc47e00 is in state SUCCESS 2026-01-01 
01:05:44.702117 | orchestrator | 2026-01-01 01:05:44 | INFO  | Task 56398dbb-fd39-43c0-9dcc-261074d8da84 is in state SUCCESS 2026-01-01 01:05:44.703533 | orchestrator | 2026-01-01 01:05:44.703614 | orchestrator | 2026-01-01 01:05:44.703635 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-01-01 01:05:44.703651 | orchestrator | 2026-01-01 01:05:44.703665 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-01-01 01:05:44.703680 | orchestrator | Thursday 01 January 2026 01:04:45 +0000 (0:00:00.257) 0:00:00.257 ****** 2026-01-01 01:05:44.703694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-01-01 01:05:44.703709 | orchestrator | 2026-01-01 01:05:44.703740 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-01-01 01:05:44.703755 | orchestrator | Thursday 01 January 2026 01:04:45 +0000 (0:00:00.279) 0:00:00.537 ****** 2026-01-01 01:05:44.703769 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-01-01 01:05:44.703803 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-01-01 01:05:44.703819 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-01-01 01:05:44.703832 | orchestrator | 2026-01-01 01:05:44.703846 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-01-01 01:05:44.703859 | orchestrator | Thursday 01 January 2026 01:04:47 +0000 (0:00:01.331) 0:00:01.869 ****** 2026-01-01 01:05:44.703873 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-01-01 01:05:44.703887 | orchestrator | 2026-01-01 01:05:44.703901 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] 
*************************** 2026-01-01 01:05:44.703914 | orchestrator | Thursday 01 January 2026 01:04:48 +0000 (0:00:01.539) 0:00:03.408 ****** 2026-01-01 01:05:44.703928 | orchestrator | changed: [testbed-manager] 2026-01-01 01:05:44.703941 | orchestrator | 2026-01-01 01:05:44.703955 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-01-01 01:05:44.703969 | orchestrator | Thursday 01 January 2026 01:04:49 +0000 (0:00:00.968) 0:00:04.377 ****** 2026-01-01 01:05:44.703983 | orchestrator | changed: [testbed-manager] 2026-01-01 01:05:44.703997 | orchestrator | 2026-01-01 01:05:44.704011 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-01-01 01:05:44.704026 | orchestrator | Thursday 01 January 2026 01:04:50 +0000 (0:00:00.939) 0:00:05.316 ****** 2026-01-01 01:05:44.704039 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-01-01 01:05:44.704196 | orchestrator | ok: [testbed-manager] 2026-01-01 01:05:44.704213 | orchestrator | 2026-01-01 01:05:44.704226 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-01-01 01:05:44.704360 | orchestrator | Thursday 01 January 2026 01:05:33 +0000 (0:00:42.389) 0:00:47.706 ****** 2026-01-01 01:05:44.704377 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-01-01 01:05:44.704391 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-01-01 01:05:44.704406 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-01-01 01:05:44.704421 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-01-01 01:05:44.704434 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-01-01 01:05:44.704448 | orchestrator | 2026-01-01 01:05:44.704462 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-01-01 01:05:44.704475 | orchestrator | 
Thursday 01 January 2026 01:05:37 +0000 (0:00:04.574) 0:00:52.280 ****** 2026-01-01 01:05:44.704488 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-01-01 01:05:44.704502 | orchestrator | 2026-01-01 01:05:44.704516 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-01-01 01:05:44.704529 | orchestrator | Thursday 01 January 2026 01:05:38 +0000 (0:00:00.461) 0:00:52.742 ****** 2026-01-01 01:05:44.704542 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:05:44.704556 | orchestrator | 2026-01-01 01:05:44.704569 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-01-01 01:05:44.704583 | orchestrator | Thursday 01 January 2026 01:05:38 +0000 (0:00:00.132) 0:00:52.874 ****** 2026-01-01 01:05:44.704597 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:05:44.704611 | orchestrator | 2026-01-01 01:05:44.704624 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-01-01 01:05:44.704637 | orchestrator | Thursday 01 January 2026 01:05:38 +0000 (0:00:00.651) 0:00:53.526 ****** 2026-01-01 01:05:44.704650 | orchestrator | changed: [testbed-manager] 2026-01-01 01:05:44.704664 | orchestrator | 2026-01-01 01:05:44.704677 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-01-01 01:05:44.704691 | orchestrator | Thursday 01 January 2026 01:05:40 +0000 (0:00:01.632) 0:00:55.158 ****** 2026-01-01 01:05:44.704705 | orchestrator | changed: [testbed-manager] 2026-01-01 01:05:44.704718 | orchestrator | 2026-01-01 01:05:44.704732 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-01-01 01:05:44.704759 | orchestrator | Thursday 01 January 2026 01:05:41 +0000 (0:00:00.847) 0:00:56.005 ****** 2026-01-01 01:05:44.704774 | orchestrator | changed: [testbed-manager] 2026-01-01 01:05:44.704787 | orchestrator | 
2026-01-01 01:05:44.704800 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-01-01 01:05:44.704813 | orchestrator | Thursday 01 January 2026 01:05:41 +0000 (0:00:00.553) 0:00:56.559 ****** 2026-01-01 01:05:44.704827 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-01-01 01:05:44.704841 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-01-01 01:05:44.704855 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-01-01 01:05:44.704867 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-01-01 01:05:44.704882 | orchestrator | 2026-01-01 01:05:44.704895 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:05:44.704908 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 01:05:44.704923 | orchestrator | 2026-01-01 01:05:44.704936 | orchestrator | 2026-01-01 01:05:44.704968 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:05:44.704981 | orchestrator | Thursday 01 January 2026 01:05:43 +0000 (0:00:01.463) 0:00:58.022 ****** 2026-01-01 01:05:44.704994 | orchestrator | =============================================================================== 2026-01-01 01:05:44.705007 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.39s 2026-01-01 01:05:44.705021 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.57s 2026-01-01 01:05:44.705035 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.63s 2026-01-01 01:05:44.705058 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.54s 2026-01-01 01:05:44.705073 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.46s 2026-01-01 01:05:44.705087 | orchestrator | 
osism.services.cephclient : Create required directories ----------------- 1.33s 2026-01-01 01:05:44.705100 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s 2026-01-01 01:05:44.705113 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.94s 2026-01-01 01:05:44.705127 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.85s 2026-01-01 01:05:44.705139 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.65s 2026-01-01 01:05:44.705152 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.55s 2026-01-01 01:05:44.705165 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2026-01-01 01:05:44.705178 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.28s 2026-01-01 01:05:44.705191 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-01-01 01:05:44.705205 | orchestrator | 2026-01-01 01:05:44.705219 | orchestrator | 2026-01-01 01:05:44.705232 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:05:44.705247 | orchestrator | 2026-01-01 01:05:44.705259 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:05:44.705292 | orchestrator | Thursday 01 January 2026 01:04:35 +0000 (0:00:00.726) 0:00:00.726 ****** 2026-01-01 01:05:44.705307 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:05:44.705319 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:05:44.705332 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:05:44.705345 | orchestrator | 2026-01-01 01:05:44.705360 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:05:44.705374 | orchestrator | Thursday 01 January 
2026 01:04:35 +0000 (0:00:00.426) 0:00:01.152 ****** 2026-01-01 01:05:44.705387 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-01 01:05:44.705400 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-01 01:05:44.705414 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-01 01:05:44.705439 | orchestrator | 2026-01-01 01:05:44.705452 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-01 01:05:44.705467 | orchestrator | 2026-01-01 01:05:44.705481 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-01 01:05:44.705494 | orchestrator | Thursday 01 January 2026 01:04:36 +0000 (0:00:00.553) 0:00:01.706 ****** 2026-01-01 01:05:44.705507 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:05:44.705521 | orchestrator | 2026-01-01 01:05:44.705534 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-01-01 01:05:44.705548 | orchestrator | Thursday 01 January 2026 01:04:36 +0000 (0:00:00.692) 0:00:02.398 ****** 2026-01-01 01:05:44.705562 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left). 2026-01-01 01:05:44.705576 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left). 2026-01-01 01:05:44.705590 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left). 2026-01-01 01:05:44.705603 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left). 2026-01-01 01:05:44.705617 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left). 
2026-01-01 01:05:44.705677 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, 
url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767229542.1317031-3410-181461357404006/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767229542.1317031-3410-181461357404006/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767229542.1317031-3410-181461357404006/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_s8_yw9xx/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_s8_yw9xx/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_s8_yw9xx/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_s8_yw9xx/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_s8_yw9xx/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in 
_do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-01 01:05:44.705707 | orchestrator |
2026-01-01 01:05:44.705721 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:05:44.705735 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-01 01:05:44.705758 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:05:44.705773 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:05:44.705787 | orchestrator |
2026-01-01 01:05:44.705801 | orchestrator |
2026-01-01 01:05:44.705815 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:05:44.705828 | orchestrator | Thursday 01 January 2026 01:05:43 +0000 (0:01:06.616) 0:01:09.014 ******
2026-01-01 01:05:44.705843 | orchestrator | ===============================================================================
2026-01-01 01:05:44.705857 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 66.62s
2026-01-01 01:05:44.705870 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.69s
2026-01-01 01:05:44.705883 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2026-01-01 01:05:44.705897 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s
2026-01-01 01:05:44.705910 | orchestrator | 2026-01-01 01:05:44 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state STARTED
2026-01-01 01:05:44.705923 | orchestrator | 2026-01-01 01:05:44 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:05:47.783629 | orchestrator | 2026-01-01 01:05:47 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED
2026-01-01 01:05:47.786240 | orchestrator | 2026-01-01 01:05:47 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state STARTED
2026-01-01 01:05:47.788360 | orchestrator | 2026-01-01 01:05:47 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state STARTED
2026-01-01 01:05:47.790858 | orchestrator | 2026-01-01 01:05:47 | INFO  | Task 476ab972-e90c-4e97-aad6-f11edd00c06e is in state STARTED
2026-01-01 01:05:47.792434 | orchestrator | 2026-01-01 01:05:47 | INFO  | Task 442ba250-13f3-473e-84b7-6909a144f57c is in state STARTED
2026-01-01 01:05:47.792477 | orchestrator | 2026-01-01 01:05:47 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:05:50.838819 | orchestrator | 2026-01-01 01:05:50 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED
2026-01-01 01:05:50.838946 | orchestrator | 2026-01-01 01:05:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:05:50.841150 | orchestrator |
2026-01-01 01:05:50.841212 | orchestrator |
2026-01-01 01:05:50.841233 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 01:05:50.841254 | orchestrator |
2026-01-01 01:05:50.841352 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 01:05:50.841374 | orchestrator | Thursday 01 January 2026 01:04:34 +0000 (0:00:00.352) 0:00:00.352 ******
2026-01-01 01:05:50.841393 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:05:50.841414 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:05:50.841431 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:05:50.841449 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:05:50.841466 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:05:50.841484 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:05:50.841502 | orchestrator |
2026-01-01 01:05:50.841519 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 01:05:50.841539 | orchestrator | Thursday 01 January 2026 01:04:35 +0000 (0:00:01.001) 0:00:01.354 ******
2026-01-01 01:05:50.841557 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-01-01 01:05:50.841577 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-01-01 01:05:50.841595 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-01-01 01:05:50.841613 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-01-01 01:05:50.841662 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-01-01 01:05:50.841680 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-01-01 01:05:50.841698 | orchestrator |
2026-01-01 01:05:50.841734 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-01-01 01:05:50.841754 | orchestrator |
2026-01-01 01:05:50.841773 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-01 01:05:50.841793 | orchestrator | Thursday 01 January 2026 01:04:36 +0000 (0:00:00.876) 0:00:02.230 ******
2026-01-01 01:05:50.841813 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 01:05:50.841834 | orchestrator |
2026-01-01 01:05:50.841853 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-01-01 01:05:50.841870 | orchestrator | Thursday 01 January 2026 01:04:37 +0000 (0:00:01.342) 0:00:03.573 ******
2026-01-01 01:05:50.841889 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:05:50.841908 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:05:50.841928 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:05:50.841948 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:05:50.841968 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:05:50.841987 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:05:50.842007 | orchestrator |
2026-01-01 01:05:50.842097 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-01-01 01:05:50.842119 | orchestrator | Thursday 01 January 2026 01:04:39 +0000 (0:00:01.387) 0:00:04.961 ******
2026-01-01 01:05:50.842140 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:05:50.842161 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:05:50.842227 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:05:50.842247 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:05:50.842266 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:05:50.842310 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:05:50.842329 | orchestrator |
2026-01-01 01:05:50.842347 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-01-01 01:05:50.842366 | orchestrator | Thursday 01 January 2026 01:04:40 +0000 (0:00:01.195) 0:00:06.157 ******
2026-01-01 01:05:50.842385 | orchestrator | ok: [testbed-node-0] => {
2026-01-01 01:05:50.842405 | orchestrator |  "changed": false,
2026-01-01 01:05:50.842424 | orchestrator |  "msg": "All assertions passed"
2026-01-01 01:05:50.842443 | orchestrator | }
2026-01-01 01:05:50.842463 | orchestrator | ok: [testbed-node-1] => {
2026-01-01 01:05:50.842481 | orchestrator |  "changed": false,
2026-01-01 01:05:50.842500 | orchestrator |  "msg": "All assertions passed"
2026-01-01 01:05:50.842518 | orchestrator | }
2026-01-01 01:05:50.842537 | orchestrator | ok: [testbed-node-2] => {
2026-01-01 01:05:50.842555 | orchestrator |  "changed": false,
2026-01-01 01:05:50.842573 | orchestrator |  "msg": "All assertions passed"
2026-01-01 01:05:50.842592 | orchestrator | }
2026-01-01 01:05:50.842606 | orchestrator | ok: [testbed-node-3] => {
2026-01-01 01:05:50.842616 | orchestrator |  "changed": false,
2026-01-01 01:05:50.842627 | orchestrator |  "msg": "All assertions passed"
2026-01-01 01:05:50.842656 | orchestrator | }
2026-01-01 01:05:50.842667 | orchestrator | ok: [testbed-node-4] => {
2026-01-01 01:05:50.842678 | orchestrator |  "changed": false,
2026-01-01 01:05:50.842700 | orchestrator |  "msg": "All assertions passed"
2026-01-01 01:05:50.842711 | orchestrator | }
2026-01-01 01:05:50.842722 | orchestrator | ok: [testbed-node-5] => {
2026-01-01 01:05:50.842733 | orchestrator |  "changed": false,
2026-01-01 01:05:50.842744 | orchestrator |  "msg": "All assertions passed"
2026-01-01 01:05:50.842754 | orchestrator | }
2026-01-01 01:05:50.842765 | orchestrator |
2026-01-01 01:05:50.842776 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-01-01 01:05:50.842787 | orchestrator | Thursday 01 January 2026 01:04:41 +0000 (0:00:00.998) 0:00:07.155 ******
2026-01-01 01:05:50.842798 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:05:50.842809 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:05:50.842834 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:05:50.842845 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:05:50.842857 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:05:50.842868 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:05:50.842879 | orchestrator |
2026-01-01 01:05:50.842890 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] **************
2026-01-01 01:05:50.842901 | orchestrator | Thursday 01 January 2026 01:04:42 +0000 (0:00:00.689) 0:00:07.845 ******
2026-01-01 01:05:50.842913 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left).
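Each `failed:` line in this log embeds the module result as JSON after the `=> ` marker: the keystoneauth traceback is carried in `module_stderr`, `module_stdout` is empty, and that empty stdout is what produces the wrapper's "MODULE FAILURE: No start of json char found" message. A small sketch of extracting the root cause from such a line (the abbreviated `line` literal and the `root_cause` helper are hypothetical, for illustration only):

```python
import json

# One abbreviated, hypothetical failed-item line in the format this log uses:
line = (
    'failed: [testbed-node-0] (item=neutron (network)) => '
    '{"action": "openstack.cloud.catalog_service", "attempts": 5, '
    '"module_stderr": "keystoneauth1.exceptions.discovery.DiscoveryFailure: '
    'Could not find versioned identity endpoints when attempting to '
    'authenticate. Please check that your auth_url is correct. '
    'Service Unavailable (HTTP 503)\\n", '
    '"msg": "MODULE FAILURE: No start of json char found\\n'
    'See stdout/stderr for the exact error", "rc": 1}'
)

def root_cause(line: str) -> str:
    """Parse the JSON result after '=> ' and return the last non-empty
    stderr line; for keystoneauth tracebacks that is the final exception."""
    result = json.loads(line.split(" => ", 1)[1])
    stderr_lines = [l for l in result["module_stderr"].splitlines() if l.strip()]
    return stderr_lines[-1] if stderr_lines else result.get("msg", "")

print(root_cause(line))
```

Here the last stderr line is the DiscoveryFailure, which points at keystone's discovery document being unreachable (HTTP 503 from api-int.testbed.osism.xyz:5000) rather than a wrong auth_url.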
2026-01-01 01:05:50.843070 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left). 2026-01-01 01:05:50.843085 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left). 2026-01-01 01:05:50.843096 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left). 2026-01-01 01:05:50.843107 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left). 2026-01-01 01:05:50.843181 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 136, in _do_create_plugin\n disc = self.get_discovery(\n ^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 703, in get_discovery\n return discover.get_discovery(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1742, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 585, in __init__\n self._data = get_version_data(\n ^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 114, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1320, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1118, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767229546.5064995-3448-269423382249901/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767229546.5064995-3448-269423382249901/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1767229546.5064995-3448-269423382249901/AnsiballZ_catalog_service.py\", line 47, in 
invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_1v_t2hcs/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_1v_t2hcs/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_1v_t2hcs/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_1v_t2hcs/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_1v_t2hcs/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 91, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 403, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1478, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 573, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 296, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 139, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 221, in get_auth_ref\n plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 163, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-01 01:05:50.843209 | orchestrator |
2026-01-01 01:05:50.843221 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:05:50.843233 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-01-01 01:05:50.843245 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 01:05:50.843258 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 01:05:50.843310 | orchestrator | testbed-node-3 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 01:05:50.843330 | orchestrator | testbed-node-4 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 01:05:50.843348 | orchestrator | testbed-node-5 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 01:05:50.843366 | orchestrator |
2026-01-01 01:05:50.843382 | orchestrator |
2026-01-01 01:05:50.843399 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:05:50.843415 | orchestrator | Thursday 01 January 2026 01:05:47 +0000 (0:01:05.734) 0:01:13.580 ******
2026-01-01 01:05:50.843433 | orchestrator | ===============================================================================
2026-01-01 01:05:50.843450 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 65.74s
2026-01-01 01:05:50.843469 | orchestrator | neutron : Get container facts ------------------------------------------- 1.39s
2026-01-01 01:05:50.843487 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.34s
2026-01-01 01:05:50.843507 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.20s
2026-01-01 01:05:50.843526 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.01s
2026-01-01 01:05:50.843543 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 1.00s
2026-01-01 01:05:50.843574 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s
2026-01-01 01:05:50.843594 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.69s
2026-01-01 01:05:50.843613 | orchestrator | 2026-01-01 01:05:50 | INFO  | Task dd2ce329-2d5e-475d-bda9-860460158b06 is in state SUCCESS
2026-01-01 01:05:50.843633 | orchestrator | 2026-01-01 01:05:50 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state STARTED
2026-01-01 01:05:50.846360 | orchestrator | 2026-01-01 01:05:50 | INFO  | Task 476ab972-e90c-4e97-aad6-f11edd00c06e is in state STARTED
2026-01-01 01:05:50.848127 | orchestrator | 2026-01-01 01:05:50 | INFO  | Task 442ba250-13f3-473e-84b7-6909a144f57c is in state STARTED
2026-01-01 01:05:50.848169 | orchestrator | 2026-01-01 01:05:50 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:05:53.900389 | orchestrator | 2026-01-01 01:05:53 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED
2026-01-01 01:05:53.900614 | orchestrator | 2026-01-01 01:05:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:05:53.901535 | orchestrator | 2026-01-01 01:05:53 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state STARTED
2026-01-01 01:05:53.903717 | orchestrator | 2026-01-01 01:05:53 | INFO  | Task 476ab972-e90c-4e97-aad6-f11edd00c06e is in state STARTED
2026-01-01 01:05:53.904490 | orchestrator | 2026-01-01 01:05:53 | INFO  | Task 442ba250-13f3-473e-84b7-6909a144f57c is in state STARTED
2026-01-01 01:05:53.904531 | orchestrator | 2026-01-01 01:05:53 | INFO  | Wait 1 second(s) until the
next check
2026-01-01 01:05:56.946397 | orchestrator | 2026-01-01 01:05:56 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED
2026-01-01 01:05:56.946962 | orchestrator | 2026-01-01 01:05:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:05:56.948370 | orchestrator | 2026-01-01 01:05:56 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state STARTED
2026-01-01 01:05:56.950380 | orchestrator | 2026-01-01 01:05:56 | INFO  | Task 476ab972-e90c-4e97-aad6-f11edd00c06e is in state STARTED
2026-01-01 01:05:56.951644 | orchestrator | 2026-01-01 01:05:56 | INFO  | Task 442ba250-13f3-473e-84b7-6909a144f57c is in state STARTED
2026-01-01 01:05:56.951672 | orchestrator | 2026-01-01 01:05:56 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:06:42.834654 | orchestrator | 2026-01-01 01:06:42 | INFO  |
Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:06:42.836864 | orchestrator | 2026-01-01 01:06:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:06:42.839466 | orchestrator | 2026-01-01 01:06:42 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state STARTED 2026-01-01 01:06:42.841149 | orchestrator | 2026-01-01 01:06:42 | INFO  | Task 476ab972-e90c-4e97-aad6-f11edd00c06e is in state STARTED 2026-01-01 01:06:42.843813 | orchestrator | 2026-01-01 01:06:42 | INFO  | Task 442ba250-13f3-473e-84b7-6909a144f57c is in state STARTED 2026-01-01 01:06:42.843909 | orchestrator | 2026-01-01 01:06:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:06:45.897631 | orchestrator | 2026-01-01 01:06:45 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:06:45.900342 | orchestrator | 2026-01-01 01:06:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:06:45.903303 | orchestrator | 2026-01-01 01:06:45 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state STARTED 2026-01-01 01:06:45.905711 | orchestrator | 2026-01-01 01:06:45 | INFO  | Task 476ab972-e90c-4e97-aad6-f11edd00c06e is in state STARTED 2026-01-01 01:06:45.908533 | orchestrator | 2026-01-01 01:06:45 | INFO  | Task 442ba250-13f3-473e-84b7-6909a144f57c is in state STARTED 2026-01-01 01:06:45.908622 | orchestrator | 2026-01-01 01:06:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:06:48.956384 | orchestrator | 2026-01-01 01:06:48 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:06:48.957634 | orchestrator | 2026-01-01 01:06:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:06:48.959929 | orchestrator | 2026-01-01 01:06:48 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state STARTED 2026-01-01 01:06:48.963403 | orchestrator | 2026-01-01 01:06:48 | INFO  | Task 
476ab972-e90c-4e97-aad6-f11edd00c06e is in state STARTED 2026-01-01 01:06:48.967770 | orchestrator | 2026-01-01 01:06:48 | INFO  | Task 442ba250-13f3-473e-84b7-6909a144f57c is in state STARTED 2026-01-01 01:06:48.967818 | orchestrator | 2026-01-01 01:06:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:06:52.021326 | orchestrator | 2026-01-01 01:06:52 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:06:52.024084 | orchestrator | 2026-01-01 01:06:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:06:52.026297 | orchestrator | 2026-01-01 01:06:52 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state STARTED 2026-01-01 01:06:52.029144 | orchestrator | 2026-01-01 01:06:52 | INFO  | Task 476ab972-e90c-4e97-aad6-f11edd00c06e is in state STARTED 2026-01-01 01:06:52.032273 | orchestrator | 2026-01-01 01:06:52 | INFO  | Task 442ba250-13f3-473e-84b7-6909a144f57c is in state STARTED 2026-01-01 01:06:52.032344 | orchestrator | 2026-01-01 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:06:55.086951 | orchestrator | 2026-01-01 01:06:55 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:06:55.088975 | orchestrator | 2026-01-01 01:06:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:06:55.090350 | orchestrator | 2026-01-01 01:06:55 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state STARTED 2026-01-01 01:06:55.092093 | orchestrator | 2026-01-01 01:06:55 | INFO  | Task 476ab972-e90c-4e97-aad6-f11edd00c06e is in state STARTED 2026-01-01 01:06:55.093815 | orchestrator | 2026-01-01 01:06:55 | INFO  | Task 442ba250-13f3-473e-84b7-6909a144f57c is in state STARTED 2026-01-01 01:06:55.093844 | orchestrator | 2026-01-01 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:06:58.152732 | orchestrator | 2026-01-01 01:06:58 | INFO  | Task 
ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED
2026-01-01 01:06:58.154585 | orchestrator | 2026-01-01 01:06:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:06:58.156274 | orchestrator | 2026-01-01 01:06:58 | INFO  | Task 512e8544-21e6-45ec-a794-bdd5fd92bc50 is in state SUCCESS
2026-01-01 01:06:58.157084 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 01:06:58.157123 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 01:06:58.157141 | orchestrator | Thursday 01 January 2026 01:05:47 +0000 (0:00:00.268) 0:00:00.268 ******
2026-01-01 01:06:58.157158 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:06:58.157177 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:06:58.157194 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:06:58.157215 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 01:06:58.157259 | orchestrator | Thursday 01 January 2026 01:05:48 +0000 (0:00:00.362) 0:00:00.630 ******
2026-01-01 01:06:58.157270 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-01-01 01:06:58.157281 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-01-01 01:06:58.157291 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-01-01 01:06:58.157315 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-01-01 01:06:58.157349 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-01-01 01:06:58.157364 | orchestrator | Thursday 01 January 2026 01:05:48 +0000 (0:00:00.505) 0:00:01.135 ******
2026-01-01 01:06:58.157380 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:06:58.157414 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-01-01 01:06:58.157431 | orchestrator | Thursday 01 January 2026 01:05:49 +0000 (0:00:00.659) 0:00:01.794 ******
2026-01-01 01:06:58.157448 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left).
2026-01-01 01:06:58.157465 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left).
2026-01-01 01:06:58.157482 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left).
2026-01-01 01:06:58.157500 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left).
2026-01-01 01:06:58.157538 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left).
2026-01-01 01:06:58.157602 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000.
Attempting to parse version from URL.
[keystoneauth1 traceback decoded and abridged: session.get('https://api-int.testbed.osism.xyz:5000') raised keystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503); while handling that, identity discovery raised keystoneauth1.exceptions.discovery.DiscoveryFailure]
Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct.
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-01 01:06:58.157682 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:06:58.157695 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-01 01:06:58.157713 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:06:58.157732 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:06:58.157783 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:06:58.157799 | orchestrator | Thursday 01 January 2026 01:06:55 +0000 (0:01:06.517) 0:01:08.312 ******
2026-01-01 01:06:58.157816 | orchestrator | ===============================================================================
2026-01-01 01:06:58.157833 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 66.52s
2026-01-01 01:06:58.157851 | orchestrator | placement : include_tasks ----------------------------------------------- 0.66s
2026-01-01 01:06:58.157868 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s
2026-01-01 01:06:58.157885 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2026-01-01 01:06:58.159390 | orchestrator | 2026-01-01 01:06:58 | INFO  | Task 476ab972-e90c-4e97-aad6-f11edd00c06e is in state STARTED
2026-01-01 01:06:58.161065 | orchestrator | 2026-01-01 01:06:58 | INFO  | Task 442ba250-13f3-473e-84b7-6909a144f57c is in state SUCCESS
2026-01-01
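The failed task above shows repeated HTTP 503 responses on identity discovery against https://api-int.testbed.osism.xyz:5000. A hedged diagnostic sketch to reproduce that request outside Ansible (the URL comes from this log; `probe_identity_v3` and the injectable `opener` are illustrative names, not OSISM or keystoneauth1 API):

```python
# Hedged diagnostic sketch, not OSISM tooling: reproduce the failing Keystone
# discovery request from the log. A healthy deployment answers
# GET <auth_url>/v3 with HTTP 200 and a version document; the HTTP 503 in the
# log suggests the load balancer reached no healthy keystone backend.
import json
import urllib.error
import urllib.request

def probe_identity_v3(base="https://api-int.testbed.osism.xyz:5000",
                      opener=urllib.request.urlopen):
    """Return (http_status, version_id); version_id is None unless HTTP 200."""
    try:
        with opener(base.rstrip("/") + "/v3", timeout=10) as resp:
            body = json.loads(resp.read().decode())
    except urllib.error.HTTPError as exc:
        # keystoneauth1 maps a 503 here to exceptions.http.ServiceUnavailable,
        # which generic discovery then wraps in DiscoveryFailure, as seen above.
        return exc.code, None
    return 200, body.get("version", {}).get("id")
```

The `opener` parameter only exists so the probe can be exercised without network access; calling `probe_identity_v3()` directly performs the same GET the traceback shows failing.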
01:06:58.162143 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 01:06:58.162217 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 01:06:58.162263 | orchestrator | Thursday 01 January 2026 01:05:48 +0000 (0:00:00.315) 0:00:00.315 ******
2026-01-01 01:06:58.162274 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:06:58.162285 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:06:58.162314 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:06:58.162347 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 01:06:58.162357 | orchestrator | Thursday 01 January 2026 01:05:49 +0000 (0:00:00.381) 0:00:00.697 ******
2026-01-01 01:06:58.162367 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-01 01:06:58.162378 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-01 01:06:58.162387 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-01 01:06:58.162407 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-01 01:06:58.162426 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-01 01:06:58.162436 | orchestrator | Thursday 01 January 2026 01:05:49 +0000 (0:00:00.491) 0:00:01.189 ******
2026-01-01 01:06:58.162446 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:06:58.162466 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-01-01 01:06:58.162476 | orchestrator | Thursday 01 January 2026 01:05:50 +0000 (0:00:00.645) 0:00:01.834 ******
2026-01-01 01:06:58.162485 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left).
2026-01-01 01:06:58.162501 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left).
2026-01-01 01:06:58.162518 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left).
2026-01-01 01:06:58.162535 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left).
2026-01-01 01:06:58.162550 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left).
2026-01-01 01:06:58.162616 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"action": "openstack.cloud.catalog_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000.
Attempting to parse version from URL.
[identical keystoneauth1 traceback to the placement failure above, decoded and abridged: ServiceUnavailable (HTTP 503) from https://api-int.testbed.osism.xyz:5000, then keystoneauth1.exceptions.discovery.DiscoveryFailure]
Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct.
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-01 01:06:58.162676 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:06:58.162692 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-01 01:06:58.162709 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:06:58.162732 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:06:58.162777 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:06:58.162791 | orchestrator | Thursday 01 January 2026 01:06:56 +0000 (0:01:06.249) 0:01:08.083 ******
2026-01-01 01:06:58.162806 | orchestrator | ===============================================================================
2026-01-01 01:06:58.162822 | orchestrator | service-ks-register : magnum | Creating/deleting services -------------- 66.25s
2026-01-01 01:06:58.162837 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.65s
2026-01-01 01:06:58.162853 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s
2026-01-01 01:06:58.162868 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-01-01 01:06:58.163521 | orchestrator | 2026-01-01 01:06:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:06:58.164767 | orchestrator | 2026-01-01 01:06:58 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED
2026-01-01 01:06:58.165019 | orchestrator | 2026-01-01
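Both the placement and magnum plays end in the same PLAY RECAP shape, with testbed-node-0 the only failed host. A small helper sketch for scanning long job logs like this one (the regex and function name are my own, assuming only the recap row format visible above):

```python
# Sketch, assuming the "host : ok=N changed=N unreachable=N failed=N ..."
# recap row format shown in this log: extract per-host counters so failed
# hosts can be found mechanically instead of by eye.
import re

RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(line):
    """Return (host, counters) for a recap row, or None for other lines."""
    match = RECAP_RE.search(line)
    if not match:
        return None
    counters = {key: int(val) for key, val in match.groupdict().items()
                if key != "host"}
    return match.group("host"), counters
```

Feeding each log line through `parse_recap` and keeping rows with `counters["failed"] > 0` would flag testbed-node-0 in both recaps above.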
01:06:58 | INFO  | Wait 1 second(s) until the next check
[... polling rounds 01:07:01 to 01:07:19 elided (roughly every 3 seconds): tasks ff32c0f3-8c05-4844-909d-ad0514ee5e4d, e5169f14-1040-40f8-a263-f379e4d4ea2a, 476ab972-e90c-4e97-aad6-f11edd00c06e, 3fcd444d-0d9d-4810-905b-dfca2755c858 and 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 remain in state STARTED ...]
2026-01-01 01:07:22.588093 | orchestrator | 2026-01-01 01:07:22 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED
2026-01-01 01:07:22.590833 | orchestrator | 2026-01-01 01:07:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:07:22.592879 | orchestrator | 2026-01-01 01:07:22 | INFO  | Task 476ab972-e90c-4e97-aad6-f11edd00c06e is in state SUCCESS
2026-01-01 01:07:22.595230 | orchestrator | 2026-01-01 01:07:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:07:22.597695 | orchestrator | 2026-01-01 01:07:22 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED
2026-01-01 01:07:22.597740 | orchestrator | 2026-01-01 01:07:22 | INFO  | Wait 1 second(s) until the next check
[... polling rounds 01:07:25 and 01:07:28 elided: the four remaining tasks stay in state STARTED ...]
2026-01-01 01:07:31.723167 | orchestrator | 2026-01-01 01:07:31 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED
2026-01-01 01:07:31.725166 | orchestrator | 2026-01-01 01:07:31 | INFO  | Task
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:07:31.726832 | orchestrator | 2026-01-01 01:07:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:07:31.728354 | orchestrator | 2026-01-01 01:07:31 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:07:31.728381 | orchestrator | 2026-01-01 01:07:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:34.783873 | orchestrator | 2026-01-01 01:07:34 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:07:34.786101 | orchestrator | 2026-01-01 01:07:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:07:34.788773 | orchestrator | 2026-01-01 01:07:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:07:34.790795 | orchestrator | 2026-01-01 01:07:34 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:07:34.790963 | orchestrator | 2026-01-01 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:37.847783 | orchestrator | 2026-01-01 01:07:37 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:07:37.850916 | orchestrator | 2026-01-01 01:07:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:07:37.852714 | orchestrator | 2026-01-01 01:07:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:07:37.854584 | orchestrator | 2026-01-01 01:07:37 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:07:37.854953 | orchestrator | 2026-01-01 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:40.917530 | orchestrator | 2026-01-01 01:07:40 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:07:40.919660 | orchestrator | 2026-01-01 01:07:40 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:07:40.923228 | orchestrator | 2026-01-01 01:07:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:07:40.927006 | orchestrator | 2026-01-01 01:07:40 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:07:40.927079 | orchestrator | 2026-01-01 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:43.978720 | orchestrator | 2026-01-01 01:07:43 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:07:43.980501 | orchestrator | 2026-01-01 01:07:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:07:43.982817 | orchestrator | 2026-01-01 01:07:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:07:43.984953 | orchestrator | 2026-01-01 01:07:43 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:07:43.985025 | orchestrator | 2026-01-01 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:47.029743 | orchestrator | 2026-01-01 01:07:47 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:07:47.031632 | orchestrator | 2026-01-01 01:07:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:07:47.033620 | orchestrator | 2026-01-01 01:07:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:07:47.035409 | orchestrator | 2026-01-01 01:07:47 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:07:47.035439 | orchestrator | 2026-01-01 01:07:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:50.067308 | orchestrator | 2026-01-01 01:07:50 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:07:50.067614 | orchestrator | 2026-01-01 01:07:50 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:07:50.069574 | orchestrator | 2026-01-01 01:07:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:07:50.071992 | orchestrator | 2026-01-01 01:07:50 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:07:50.072088 | orchestrator | 2026-01-01 01:07:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:53.107382 | orchestrator | 2026-01-01 01:07:53 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:07:53.108010 | orchestrator | 2026-01-01 01:07:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:07:53.108710 | orchestrator | 2026-01-01 01:07:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:07:53.109786 | orchestrator | 2026-01-01 01:07:53 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:07:53.109842 | orchestrator | 2026-01-01 01:07:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:56.259358 | orchestrator | 2026-01-01 01:07:56 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:07:56.261160 | orchestrator | 2026-01-01 01:07:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:07:56.264492 | orchestrator | 2026-01-01 01:07:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:07:56.267716 | orchestrator | 2026-01-01 01:07:56 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:07:56.267768 | orchestrator | 2026-01-01 01:07:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:59.306565 | orchestrator | 2026-01-01 01:07:59 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:07:59.306873 | orchestrator | 2026-01-01 01:07:59 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:07:59.308589 | orchestrator | 2026-01-01 01:07:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:07:59.309410 | orchestrator | 2026-01-01 01:07:59 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:07:59.310160 | orchestrator | 2026-01-01 01:07:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:02.354405 | orchestrator | 2026-01-01 01:08:02 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:02.358278 | orchestrator | 2026-01-01 01:08:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:02.359706 | orchestrator | 2026-01-01 01:08:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:02.364912 | orchestrator | 2026-01-01 01:08:02 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:02.366053 | orchestrator | 2026-01-01 01:08:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:05.408308 | orchestrator | 2026-01-01 01:08:05 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:05.410522 | orchestrator | 2026-01-01 01:08:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:05.412298 | orchestrator | 2026-01-01 01:08:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:05.415558 | orchestrator | 2026-01-01 01:08:05 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:05.415583 | orchestrator | 2026-01-01 01:08:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:08.479890 | orchestrator | 2026-01-01 01:08:08 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:08.486780 | orchestrator | 2026-01-01 01:08:08 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:08.488577 | orchestrator | 2026-01-01 01:08:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:08.491104 | orchestrator | 2026-01-01 01:08:08 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:08.491161 | orchestrator | 2026-01-01 01:08:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:11.543577 | orchestrator | 2026-01-01 01:08:11 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:11.545581 | orchestrator | 2026-01-01 01:08:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:11.547987 | orchestrator | 2026-01-01 01:08:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:11.550621 | orchestrator | 2026-01-01 01:08:11 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:11.550682 | orchestrator | 2026-01-01 01:08:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:14.602737 | orchestrator | 2026-01-01 01:08:14 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:14.604492 | orchestrator | 2026-01-01 01:08:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:14.606159 | orchestrator | 2026-01-01 01:08:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:14.607337 | orchestrator | 2026-01-01 01:08:14 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:14.608004 | orchestrator | 2026-01-01 01:08:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:17.660387 | orchestrator | 2026-01-01 01:08:17 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:17.662335 | orchestrator | 2026-01-01 01:08:17 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:17.664676 | orchestrator | 2026-01-01 01:08:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:17.667028 | orchestrator | 2026-01-01 01:08:17 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:17.667066 | orchestrator | 2026-01-01 01:08:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:20.713482 | orchestrator | 2026-01-01 01:08:20 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:20.714252 | orchestrator | 2026-01-01 01:08:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:20.715740 | orchestrator | 2026-01-01 01:08:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:20.718775 | orchestrator | 2026-01-01 01:08:20 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:20.718932 | orchestrator | 2026-01-01 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:23.771094 | orchestrator | 2026-01-01 01:08:23 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:23.774151 | orchestrator | 2026-01-01 01:08:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:23.775507 | orchestrator | 2026-01-01 01:08:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:23.777103 | orchestrator | 2026-01-01 01:08:23 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:23.777148 | orchestrator | 2026-01-01 01:08:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:26.820975 | orchestrator | 2026-01-01 01:08:26 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:26.821791 | orchestrator | 2026-01-01 01:08:26 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:26.824035 | orchestrator | 2026-01-01 01:08:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:26.825596 | orchestrator | 2026-01-01 01:08:26 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:26.825856 | orchestrator | 2026-01-01 01:08:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:29.876825 | orchestrator | 2026-01-01 01:08:29 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:29.877002 | orchestrator | 2026-01-01 01:08:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:29.878145 | orchestrator | 2026-01-01 01:08:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:29.879341 | orchestrator | 2026-01-01 01:08:29 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:29.879377 | orchestrator | 2026-01-01 01:08:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:32.918136 | orchestrator | 2026-01-01 01:08:32 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:32.918736 | orchestrator | 2026-01-01 01:08:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:32.919921 | orchestrator | 2026-01-01 01:08:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:32.921137 | orchestrator | 2026-01-01 01:08:32 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:32.921402 | orchestrator | 2026-01-01 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:35.961656 | orchestrator | 2026-01-01 01:08:35 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:35.964166 | orchestrator | 2026-01-01 01:08:35 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:35.967406 | orchestrator | 2026-01-01 01:08:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:35.970553 | orchestrator | 2026-01-01 01:08:35 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:35.970712 | orchestrator | 2026-01-01 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:39.017170 | orchestrator | 2026-01-01 01:08:39 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:39.017722 | orchestrator | 2026-01-01 01:08:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:39.019008 | orchestrator | 2026-01-01 01:08:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:39.020995 | orchestrator | 2026-01-01 01:08:39 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:39.021048 | orchestrator | 2026-01-01 01:08:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:42.055422 | orchestrator | 2026-01-01 01:08:42 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:42.057766 | orchestrator | 2026-01-01 01:08:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:42.059588 | orchestrator | 2026-01-01 01:08:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:42.061943 | orchestrator | 2026-01-01 01:08:42 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:42.062187 | orchestrator | 2026-01-01 01:08:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:45.116969 | orchestrator | 2026-01-01 01:08:45 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:45.119496 | orchestrator | 2026-01-01 01:08:45 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:45.121729 | orchestrator | 2026-01-01 01:08:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:45.123777 | orchestrator | 2026-01-01 01:08:45 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:45.123829 | orchestrator | 2026-01-01 01:08:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:48.170720 | orchestrator | 2026-01-01 01:08:48 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:48.173303 | orchestrator | 2026-01-01 01:08:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:48.175415 | orchestrator | 2026-01-01 01:08:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:48.176506 | orchestrator | 2026-01-01 01:08:48 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:48.176554 | orchestrator | 2026-01-01 01:08:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:51.228572 | orchestrator | 2026-01-01 01:08:51 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:51.230208 | orchestrator | 2026-01-01 01:08:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:51.232753 | orchestrator | 2026-01-01 01:08:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:51.234297 | orchestrator | 2026-01-01 01:08:51 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:51.234323 | orchestrator | 2026-01-01 01:08:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:54.282792 | orchestrator | 2026-01-01 01:08:54 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:54.287322 | orchestrator | 2026-01-01 01:08:54 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:54.289924 | orchestrator | 2026-01-01 01:08:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:54.292499 | orchestrator | 2026-01-01 01:08:54 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:54.292930 | orchestrator | 2026-01-01 01:08:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:08:57.339200 | orchestrator | 2026-01-01 01:08:57 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:08:57.345519 | orchestrator | 2026-01-01 01:08:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:08:57.347301 | orchestrator | 2026-01-01 01:08:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:08:57.350213 | orchestrator | 2026-01-01 01:08:57 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:08:57.350281 | orchestrator | 2026-01-01 01:08:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:00.390084 | orchestrator | 2026-01-01 01:09:00 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:00.390843 | orchestrator | 2026-01-01 01:09:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:00.392087 | orchestrator | 2026-01-01 01:09:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:00.393223 | orchestrator | 2026-01-01 01:09:00 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:09:00.393259 | orchestrator | 2026-01-01 01:09:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:03.437843 | orchestrator | 2026-01-01 01:09:03 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:03.438395 | orchestrator | 2026-01-01 01:09:03 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:03.439493 | orchestrator | 2026-01-01 01:09:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:03.440631 | orchestrator | 2026-01-01 01:09:03 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:09:03.440678 | orchestrator | 2026-01-01 01:09:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:06.481740 | orchestrator | 2026-01-01 01:09:06 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:06.484813 | orchestrator | 2026-01-01 01:09:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:06.488211 | orchestrator | 2026-01-01 01:09:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:06.489749 | orchestrator | 2026-01-01 01:09:06 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:09:06.489988 | orchestrator | 2026-01-01 01:09:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:09.531366 | orchestrator | 2026-01-01 01:09:09 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:09.533276 | orchestrator | 2026-01-01 01:09:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:09.536290 | orchestrator | 2026-01-01 01:09:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:09.537478 | orchestrator | 2026-01-01 01:09:09 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:09:09.537510 | orchestrator | 2026-01-01 01:09:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:12.579597 | orchestrator | 2026-01-01 01:09:12 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:12.581081 | orchestrator | 2026-01-01 01:09:12 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:12.582773 | orchestrator | 2026-01-01 01:09:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:12.584468 | orchestrator | 2026-01-01 01:09:12 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:09:12.584494 | orchestrator | 2026-01-01 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:15.633241 | orchestrator | 2026-01-01 01:09:15 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:15.634321 | orchestrator | 2026-01-01 01:09:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:15.635571 | orchestrator | 2026-01-01 01:09:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:15.636852 | orchestrator | 2026-01-01 01:09:15 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:09:15.636875 | orchestrator | 2026-01-01 01:09:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:18.679032 | orchestrator | 2026-01-01 01:09:18 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:18.680966 | orchestrator | 2026-01-01 01:09:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:18.682268 | orchestrator | 2026-01-01 01:09:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:18.684219 | orchestrator | 2026-01-01 01:09:18 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:09:18.684269 | orchestrator | 2026-01-01 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:21.722315 | orchestrator | 2026-01-01 01:09:21 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:21.722925 | orchestrator | 2026-01-01 01:09:21 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:21.723905 | orchestrator | 2026-01-01 01:09:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:21.724730 | orchestrator | 2026-01-01 01:09:21 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:09:21.724753 | orchestrator | 2026-01-01 01:09:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:24.760007 | orchestrator | 2026-01-01 01:09:24 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:24.761292 | orchestrator | 2026-01-01 01:09:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:24.762477 | orchestrator | 2026-01-01 01:09:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:24.763945 | orchestrator | 2026-01-01 01:09:24 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:09:24.763968 | orchestrator | 2026-01-01 01:09:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:27.823870 | orchestrator | 2026-01-01 01:09:27 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:27.826487 | orchestrator | 2026-01-01 01:09:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:27.828984 | orchestrator | 2026-01-01 01:09:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:27.831120 | orchestrator | 2026-01-01 01:09:27 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED 2026-01-01 01:09:27.831161 | orchestrator | 2026-01-01 01:09:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:30.881877 | orchestrator | 2026-01-01 01:09:30 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:30.883486 | orchestrator | 2026-01-01 01:09:30 | INFO  | Task 
e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:09:30.884974 | orchestrator | 2026-01-01 01:09:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:09:30.886842 | orchestrator | 2026-01-01 01:09:30 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state STARTED
2026-01-01 01:09:30.886915 | orchestrator | 2026-01-01 01:09:30 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:09:52.324258 | orchestrator | 2026-01-01 01:09:52 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED
2026-01-01 01:09:52.325046 | orchestrator | 2026-01-01 01:09:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:09:52.329819 | orchestrator | 2026-01-01 01:09:52 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED
2026-01-01 01:09:52.333210 | orchestrator | 2026-01-01 01:09:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:09:52.339545 | orchestrator | 2026-01-01 01:09:52 | INFO  | Task 1b358fdb-3174-42a3-aa63-f4b37f7a80a5 is in state SUCCESS
2026-01-01 01:09:52.341148 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2.16.14
2026-01-01 01:09:52.341172 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-01-01 01:09:52.341182 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-01-01 01:09:52.341187 | orchestrator | Thursday 01 January 2026 01:05:48 +0000 (0:00:00.290) 0:00:00.290 ******
2026-01-01 01:09:52.341208 | orchestrator | changed: [testbed-manager]
2026-01-01 01:09:52.341217 | orchestrator | TASK [Set mgr/dashboard/ssl to false]
******************************************
2026-01-01 01:09:52.341221 | orchestrator | Thursday 01 January 2026 01:05:49 +0000 (0:00:01.474) 0:00:01.764 ******
2026-01-01 01:09:52.341226 | orchestrator | changed: [testbed-manager]
2026-01-01 01:09:52.341234 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-01-01 01:09:52.341238 | orchestrator | Thursday 01 January 2026 01:05:50 +0000 (0:00:01.098) 0:00:02.862 ******
2026-01-01 01:09:52.341242 | orchestrator | changed: [testbed-manager]
2026-01-01 01:09:52.341250 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-01-01 01:09:52.341253 | orchestrator | Thursday 01 January 2026 01:05:51 +0000 (0:00:01.154) 0:00:04.017 ******
2026-01-01 01:09:52.341257 | orchestrator | changed: [testbed-manager]
2026-01-01 01:09:52.341265 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-01-01 01:09:52.341269 | orchestrator | Thursday 01 January 2026 01:05:53 +0000 (0:00:01.292) 0:00:05.309 ******
2026-01-01 01:09:52.341273 | orchestrator | changed: [testbed-manager]
2026-01-01 01:09:52.341281 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-01-01 01:09:52.341285 | orchestrator | Thursday 01 January 2026 01:05:54 +0000 (0:00:00.997) 0:00:06.307 ******
2026-01-01 01:09:52.341289 | orchestrator | changed: [testbed-manager]
2026-01-01 01:09:52.341296 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-01-01 01:09:52.341300 | orchestrator | Thursday 01 January 2026 01:05:55 +0000 (0:00:01.005) 0:00:07.313 ******
2026-01-01 01:09:52.341304 | orchestrator | changed: [testbed-manager]
2026-01-01 01:09:52.341312 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-01-01 01:09:52.341316 | orchestrator | Thursday 01 January 2026 01:05:56 +0000 (0:00:01.057) 0:00:08.370 ******
2026-01-01 01:09:52.341321 | orchestrator | changed: [testbed-manager]
2026-01-01 01:09:52.341328 | orchestrator | TASK [Create admin user] *******************************************************
2026-01-01 01:09:52.341332 | orchestrator | Thursday 01 January 2026 01:05:57 +0000 (0:00:01.112) 0:00:09.482 ******
2026-01-01 01:09:52.341336 | orchestrator | changed: [testbed-manager]
2026-01-01 01:09:52.341354 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-01-01 01:09:52.341358 | orchestrator | Thursday 01 January 2026 01:06:57 +0000 (0:01:00.074) 0:01:09.557 ******
2026-01-01 01:09:52.341362 | orchestrator | skipping: [testbed-manager]
2026-01-01 01:09:52.341370 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-01 01:09:52.341378 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-01 01:09:52.341382 | orchestrator | Thursday 01 January 2026 01:06:57 +0000 (0:00:00.176) 0:01:09.734 ******
2026-01-01 01:09:52.341386 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:09:52.341394 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-01 01:09:52.341402 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-01 01:09:52.341406 | orchestrator | Thursday 01 January 2026 01:06:59 +0000 (0:00:01.704) 0:01:11.438 ******
2026-01-01 01:09:52.341410 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:09:52.341418 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-01 01:09:52.341429 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-01 01:09:52.341433 | orchestrator | Thursday 01 January 2026 01:07:10 +0000 (0:00:11.387) 0:01:22.826 ******
2026-01-01 01:09:52.341437 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:09:52.341445 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:09:52.341450 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 01:09:52.341455 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:09:52.341460 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:09:52.341464 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:09:52.341480 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:09:52.341484 | orchestrator | Thursday 01 January 2026 01:07:22 +0000 (0:00:11.229) 0:01:34.055 ******
2026-01-01 01:09:52.341487 | orchestrator | ===============================================================================
2026-01-01 01:09:52.341491 | orchestrator | Create admin user ------------------------------------------------------ 60.07s
2026-01-01 01:09:52.341503 | orchestrator | Restart ceph manager service ------------------------------------------- 24.32s
2026-01-01 01:09:52.341508 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.47s
2026-01-01 01:09:52.341512 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s
2026-01-01 01:09:52.341516 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.15s
2026-01-01 01:09:52.341519 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.11s
2026-01-01 01:09:52.341523 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.10s
2026-01-01 01:09:52.341527 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.06s
2026-01-01 01:09:52.341531 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.01s
2026-01-01 01:09:52.341535 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.00s
2026-01-01 01:09:52.341539 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s
2026-01-01 01:09:52.341551 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 01:09:52.341559 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 01:09:52.341563 | orchestrator | Thursday 01 January 2026 01:07:00 +0000 (0:00:00.295) 0:00:00.295 ******
2026-01-01 01:09:52.341567 | orchestrator | ok: [testbed-manager]
2026-01-01 01:09:52.341571 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:09:52.341575 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:09:52.341579 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:09:52.341583 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:09:52.341587 | orchestrator | ok: [testbed-node-4]
2026-01-01 01:09:52.341591 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:09:52.341599 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 01:09:52.341603 | orchestrator | Thursday 01 January 2026 01:07:01 +0000 (0:00:00.846) 0:00:01.141 ******
2026-01-01 01:09:52.341607 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-01-01 01:09:52.341611 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-01-01 01:09:52.341615 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-01-01 01:09:52.341622 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-01-01 01:09:52.341626 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-01-01 01:09:52.341630 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-01-01 01:09:52.341634 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-01-01 01:09:52.341642 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-01-01 01:09:52.341652 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-01 01:09:52.341672 | orchestrator | Thursday 01 January 2026 01:07:02 +0000 (0:00:00.763) 0:00:01.904 ******
2026-01-01 01:09:52.341677 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 01:09:52.341687 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-01-01 01:09:52.341691 | orchestrator | Thursday 01 January 2026 01:07:04 +0000 (0:00:01.626) 0:00:03.531 ****** 2026-01-01 01:09:52.341699 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-01 01:09:52.341708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341735 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 
01:09:52.341751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341772 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341797 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 01:09:52.341820 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:09:52.341825 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341840 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341848 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341852 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341886 | orchestrator | 2026-01-01 01:09:52.341890 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-01 01:09:52.341894 | orchestrator | Thursday 01 January 2026 01:07:07 +0000 (0:00:02.979) 0:00:06.510 ****** 2026-01-01 01:09:52.341898 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:09:52.341903 | orchestrator | 2026-01-01 01:09:52.341907 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-01 01:09:52.341911 | 
orchestrator | Thursday 01 January 2026 01:07:08 +0000 (0:00:01.488) 0:00:07.999 ****** 2026-01-01 01:09:52.341915 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-01 01:09:52.341923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341939 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341949 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341953 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341957 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.341962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.341969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.341994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.342004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.342010 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.342056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.342065 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.342178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.342183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.342188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.342192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.342199 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:09:52.342204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.342208 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.342220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.342224 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.342228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.342233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.342239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.342243 | orchestrator | 2026-01-01 01:09:52.342247 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-01 01:09:52.342251 | orchestrator | Thursday 01 January 2026 01:07:14 +0000 (0:00:06.050) 0:00:14.050 ****** 2026-01-01 01:09:52.342256 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-01 01:09:52.342265 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342278 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342289 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:09:52.342296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342304 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342308 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:09:52.342312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342331 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342343 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.342348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342363 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.342367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342376 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.342382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342410 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.342414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342429 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.342433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342442 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.342446 | orchestrator | 2026-01-01 01:09:52.342450 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-01 01:09:52.342454 | orchestrator | Thursday 01 January 2026 01:07:16 +0000 (0:00:02.266) 0:00:16.316 ****** 2026-01-01 01:09:52.342458 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-01 01:09:52.342466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342470 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342479 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342501 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:09:52.342507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342512 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342516 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:09:52.342520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.342538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.342543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.342547 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.342551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.343486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.343505 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.343511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.343515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.343520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.343531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.343540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.343545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.343549 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.343582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.343587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.343591 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.343596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.343600 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.343604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.343614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.343619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.343623 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.343627 | orchestrator | 2026-01-01 01:09:52.343631 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-01 01:09:52.343636 | orchestrator | Thursday 01 January 2026 01:07:19 +0000 (0:00:02.465) 0:00:18.782 ****** 2026-01-01 01:09:52.343641 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-01 01:09:52.343648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.343653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.343680 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.343688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.343692 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.343696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.343700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.343704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.343712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.343716 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.343724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.343730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.343735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-01 01:09:52.343739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.343746 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:09:52.343751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.343758 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.343762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.343769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.343774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.343778 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.343782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.343790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.343794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.343803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.343807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.343814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.343818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.343822 | orchestrator | 2026-01-01 01:09:52.343826 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-01 01:09:52.343831 | orchestrator | Thursday 01 January 2026 01:07:25 +0000 (0:00:06.443) 0:00:25.226 ****** 2026-01-01 01:09:52.343835 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 01:09:52.343839 | orchestrator | 2026-01-01 01:09:52.343843 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-01 01:09:52.343847 | orchestrator | Thursday 01 January 2026 01:07:26 +0000 (0:00:01.120) 0:00:26.347 ****** 2026-01-01 01:09:52.343851 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:09:52.343854 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.343858 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.343862 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.343866 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.343870 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.343874 | 
orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.343878 | orchestrator | 2026-01-01 01:09:52.343882 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-01 01:09:52.343886 | orchestrator | Thursday 01 January 2026 01:07:27 +0000 (0:00:00.630) 0:00:26.978 ****** 2026-01-01 01:09:52.343890 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 01:09:52.343893 | orchestrator | 2026-01-01 01:09:52.343897 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-01 01:09:52.343901 | orchestrator | Thursday 01 January 2026 01:07:28 +0000 (0:00:00.690) 0:00:27.669 ****** 2026-01-01 01:09:52.343908 | orchestrator | [WARNING]: Skipped 2026-01-01 01:09:52.343913 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.343917 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-01-01 01:09:52.343921 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.343925 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-01-01 01:09:52.343929 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 01:09:52.343933 | orchestrator | [WARNING]: Skipped 2026-01-01 01:09:52.343939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.343943 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-01 01:09:52.343947 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.343951 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-01 01:09:52.343955 | orchestrator | [WARNING]: Skipped 2026-01-01 01:09:52.343959 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.343963 | orchestrator | node-1/prometheus.yml.d' 
path due to this access issue: 2026-01-01 01:09:52.343967 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.343972 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-01-01 01:09:52.343979 | orchestrator | [WARNING]: Skipped 2026-01-01 01:09:52.343985 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.343992 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-01-01 01:09:52.343998 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.344004 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-01-01 01:09:52.344010 | orchestrator | [WARNING]: Skipped 2026-01-01 01:09:52.344016 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.344022 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-01-01 01:09:52.344029 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.344035 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-01-01 01:09:52.344041 | orchestrator | [WARNING]: Skipped 2026-01-01 01:09:52.344047 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.344054 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-01-01 01:09:52.344059 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.344063 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-01-01 01:09:52.344067 | orchestrator | [WARNING]: Skipped 2026-01-01 01:09:52.344071 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.344075 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-01-01 
01:09:52.344079 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-01 01:09:52.344083 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-01-01 01:09:52.344087 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 01:09:52.344091 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-01 01:09:52.344098 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-01 01:09:52.344103 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-01 01:09:52.344106 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-01 01:09:52.344110 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-01 01:09:52.344115 | orchestrator | 2026-01-01 01:09:52.344120 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-01-01 01:09:52.344124 | orchestrator | Thursday 01 January 2026 01:07:29 +0000 (0:00:01.465) 0:00:29.134 ****** 2026-01-01 01:09:52.344129 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-01 01:09:52.344140 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.344146 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-01 01:09:52.344150 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.344155 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-01 01:09:52.344160 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.344165 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-01 01:09:52.344170 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.344174 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-01 01:09:52.344179 | orchestrator | skipping: [testbed-node-4] 2026-01-01 
01:09:52.344184 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-01 01:09:52.344189 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.344194 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-01 01:09:52.344199 | orchestrator | 2026-01-01 01:09:52.344203 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-01 01:09:52.344208 | orchestrator | Thursday 01 January 2026 01:07:44 +0000 (0:00:14.465) 0:00:43.600 ****** 2026-01-01 01:09:52.344213 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-01 01:09:52.344217 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-01 01:09:52.344222 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.344227 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-01 01:09:52.344232 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.344237 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.344242 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-01 01:09:52.344248 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.344256 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-01 01:09:52.344261 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.344266 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-01 01:09:52.344271 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.344277 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-01 01:09:52.344282 | orchestrator | 2026-01-01 01:09:52.344287 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-01 01:09:52.344293 | orchestrator | Thursday 01 January 2026 01:07:47 +0000 (0:00:03.064) 0:00:46.665 ****** 2026-01-01 01:09:52.344298 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:09:52.344304 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.344309 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:09:52.344314 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.344320 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:09:52.344325 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.344330 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:09:52.344336 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.344344 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:09:52.344350 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.344355 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-01 01:09:52.344360 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:09:52.344365 | orchestrator | skipping: [testbed-node-5] 
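The `[WARNING]: Skipped ... is not a directory` messages earlier in this task output are harmless: the prometheus role uses Ansible's `find` module to look for per-host config overrides under `overlays/prometheus/<hostname>/prometheus.yml.d`, and `find` emits that warning whenever the path does not exist as a directory. A minimal sketch of the expected layout, assuming the path pattern from the warnings above (using a scratch root here; on the real manager it lives under `/opt/configuration`):

```python
# Sketch of the per-host override layout the prometheus role's `find` task
# scans for. Path pattern taken from the warnings in the log above; a scratch
# root is used so the snippet is runnable anywhere.
import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp()) / "environments/kolla/files/overlays/prometheus"
override_dir = root / "testbed-node-0" / "prometheus.yml.d"
override_dir.mkdir(parents=True)  # once this directory exists, the warning disappears
# Any *.yml dropped into prometheus.yml.d is picked up as a host-specific override.
print(override_dir.is_dir())  # -> True
```

Creating the directory is only needed if host-specific overrides are actually wanted; otherwise the warning can simply be ignored.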
2026-01-01 01:09:52.344371 | orchestrator | 2026-01-01 01:09:52.344376 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-01 01:09:52.344381 | orchestrator | Thursday 01 January 2026 01:07:48 +0000 (0:00:01.327) 0:00:47.992 ****** 2026-01-01 01:09:52.344386 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 01:09:52.344392 | orchestrator | 2026-01-01 01:09:52.344397 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-01 01:09:52.344405 | orchestrator | Thursday 01 January 2026 01:07:49 +0000 (0:00:00.698) 0:00:48.691 ****** 2026-01-01 01:09:52.344410 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:09:52.344415 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.344420 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.344425 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.344431 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.344436 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.344441 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.344446 | orchestrator | 2026-01-01 01:09:52.344452 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-01 01:09:52.344457 | orchestrator | Thursday 01 January 2026 01:07:49 +0000 (0:00:00.601) 0:00:49.293 ****** 2026-01-01 01:09:52.344462 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:09:52.344468 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.344473 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.344477 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.344482 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:09:52.344486 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:09:52.344490 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:09:52.344495 | orchestrator | 2026-01-01 
01:09:52.344499 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-01 01:09:52.344504 | orchestrator | Thursday 01 January 2026 01:07:51 +0000 (0:00:02.072) 0:00:51.365 ****** 2026-01-01 01:09:52.344508 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:09:52.344512 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:09:52.344517 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:09:52.344521 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:09:52.344525 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.344530 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.344534 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:09:52.344538 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.344543 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:09:52.344547 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.344551 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:09:52.344556 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.344560 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:09:52.344564 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.344568 | orchestrator | 2026-01-01 01:09:52.344573 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-01 01:09:52.344582 | orchestrator | Thursday 01 January 2026 01:07:53 +0000 (0:00:01.395) 0:00:52.760 ****** 2026-01-01 01:09:52.344819 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:09:52.344830 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.344835 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:09:52.344840 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.344844 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:09:52.344848 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.344853 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:09:52.344857 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.344862 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:09:52.344866 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.344870 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:09:52.344875 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.344879 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-01 01:09:52.344883 | orchestrator | 2026-01-01 01:09:52.344888 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-01 01:09:52.344892 | orchestrator | Thursday 01 January 2026 01:07:54 +0000 (0:00:01.451) 0:00:54.212 ****** 2026-01-01 01:09:52.344897 | orchestrator | [WARNING]: Skipped 2026-01-01 01:09:52.344901 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-01 01:09:52.344905 | orchestrator | due to this access issue: 
2026-01-01 01:09:52.344910 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-01 01:09:52.344915 | orchestrator | not a directory 2026-01-01 01:09:52.344919 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 01:09:52.344923 | orchestrator | 2026-01-01 01:09:52.344928 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-01 01:09:52.344932 | orchestrator | Thursday 01 January 2026 01:07:55 +0000 (0:00:01.123) 0:00:55.336 ****** 2026-01-01 01:09:52.344936 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:09:52.344941 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.344945 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.344949 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.344954 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.344958 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.344963 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.344967 | orchestrator | 2026-01-01 01:09:52.344977 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-01 01:09:52.344984 | orchestrator | Thursday 01 January 2026 01:07:56 +0000 (0:00:00.953) 0:00:56.290 ****** 2026-01-01 01:09:52.344992 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:09:52.344998 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.345005 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.345012 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.345019 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.345026 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.345030 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.345034 | orchestrator | 2026-01-01 01:09:52.345039 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 
2026-01-01 01:09:52.345043 | orchestrator | Thursday 01 January 2026 01:07:57 +0000 (0:00:00.755) 0:00:57.045 ****** 2026-01-01 01:09:52.345048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.345059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.345069 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-01 01:09:52.345075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.345080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.345087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 
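The `backend_http_extra` entries in the prometheus_server definition above make HAProxy send an `Authorization` header with its `/-/ready` health checks, so the basic-auth-protected Prometheus endpoint answers 200 instead of 401. The header token is plain HTTP basic auth, i.e. base64 of `user:password`; a quick sketch, using the throwaway testbed credentials visible in the log:

```python
import base64

# HAProxy's `http-check send hdr Authorization 'Basic <token>'` line carries
# standard HTTP basic-auth credentials: base64("user:password").
def basic_auth_header(user: str, password: str) -> str:
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# The token from the log decodes back to the health-check user:
token = "aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=="
user, _, password = base64.b64decode(token).decode().partition(":")
print(user)  # -> haproxy
```

Note that basic-auth tokens are encoding, not encryption; anyone reading this job log can recover the testbed health-check password the same way.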
01:09:52.345094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.345107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.345114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.345121 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:09:52.345132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.345139 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.345145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.345156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.345165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.345174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.345179 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.345187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.345192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.345196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.345201 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.345208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.345218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.345223 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:09:52.345232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:09:52.345237 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.345241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.345247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.345257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:09:52.345270 | orchestrator | 2026-01-01 01:09:52.345275 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-01-01 01:09:52.345279 | orchestrator | Thursday 01 January 2026 01:08:02 +0000 (0:00:04.829) 0:01:01.874 ****** 2026-01-01 01:09:52.345283 | orchestrator | changed: [testbed-manager] => { 2026-01-01 01:09:52.345288 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:09:52.345292 | orchestrator | } 2026-01-01 01:09:52.345297 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 01:09:52.345301 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:09:52.345306 | orchestrator | } 2026-01-01 
01:09:52.345310 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 01:09:52.345314 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:09:52.345319 | orchestrator | } 2026-01-01 01:09:52.345323 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 01:09:52.345328 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:09:52.345332 | orchestrator | } 2026-01-01 01:09:52.345336 | orchestrator | changed: [testbed-node-3] => { 2026-01-01 01:09:52.345341 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:09:52.345345 | orchestrator | } 2026-01-01 01:09:52.345350 | orchestrator | changed: [testbed-node-4] => { 2026-01-01 01:09:52.345354 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:09:52.345358 | orchestrator | } 2026-01-01 01:09:52.345363 | orchestrator | changed: [testbed-node-5] => { 2026-01-01 01:09:52.345367 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:09:52.345372 | orchestrator | } 2026-01-01 01:09:52.345376 | orchestrator | 2026-01-01 01:09:52.345380 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 01:09:52.345385 | orchestrator | Thursday 01 January 2026 01:08:03 +0000 (0:00:00.962) 0:01:02.837 ****** 2026-01-01 01:09:52.345393 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-01 01:09:52.345398 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.345403 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.345414 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:09:52.345419 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.345424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.345429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.345436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.345441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.345449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.345454 | 
orchestrator | skipping: [testbed-manager] 2026-01-01 01:09:52.345460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.345468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.345474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.345479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.345484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.345490 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:09:52.345497 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:09:52.345502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.345511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 
01:09:52.345516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.345524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.345530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:09:52.345535 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:09:52.345540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.345546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.345554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.345559 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:09:52.345565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.345573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.345578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.345583 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:09:52.345590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:09:52.345594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.345599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:09:52.345604 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:09:52.345608 | orchestrator | 2026-01-01 01:09:52.345613 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-01 01:09:52.345617 | orchestrator | Thursday 01 January 2026 01:08:05 +0000 (0:00:01.876) 0:01:04.714 ****** 2026-01-01 01:09:52.345622 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-01 01:09:52.345626 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:09:52.345631 | orchestrator | 2026-01-01 01:09:52.345635 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-01 01:09:52.345639 | orchestrator | Thursday 01 January 2026 01:08:06 +0000 (0:00:01.087) 0:01:05.801 ****** 2026-01-01 01:09:52.345644 | orchestrator | 2026-01-01 01:09:52.345651 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-01 01:09:52.345673 | orchestrator | Thursday 01 January 2026 01:08:06 +0000 (0:00:00.097) 0:01:05.899 ****** 
2026-01-01 01:09:52.345678 | orchestrator | 2026-01-01 01:09:52.345683 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-01 01:09:52.345687 | orchestrator | Thursday 01 January 2026 01:08:06 +0000 (0:00:00.090) 0:01:05.989 ****** 2026-01-01 01:09:52.345691 | orchestrator | 2026-01-01 01:09:52.345699 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-01 01:09:52.345703 | orchestrator | Thursday 01 January 2026 01:08:06 +0000 (0:00:00.091) 0:01:06.081 ****** 2026-01-01 01:09:52.345707 | orchestrator | 2026-01-01 01:09:52.345712 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-01 01:09:52.345716 | orchestrator | Thursday 01 January 2026 01:08:06 +0000 (0:00:00.088) 0:01:06.169 ****** 2026-01-01 01:09:52.345720 | orchestrator | 2026-01-01 01:09:52.345725 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-01 01:09:52.345729 | orchestrator | Thursday 01 January 2026 01:08:06 +0000 (0:00:00.087) 0:01:06.257 ****** 2026-01-01 01:09:52.345733 | orchestrator | 2026-01-01 01:09:52.345738 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-01 01:09:52.345742 | orchestrator | Thursday 01 January 2026 01:08:07 +0000 (0:00:00.329) 0:01:06.586 ****** 2026-01-01 01:09:52.345747 | orchestrator | 2026-01-01 01:09:52.345751 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-01-01 01:09:52.345755 | orchestrator | Thursday 01 January 2026 01:08:07 +0000 (0:00:00.092) 0:01:06.679 ****** 2026-01-01 01:09:52.345760 | orchestrator | changed: [testbed-manager] 2026-01-01 01:09:52.345764 | orchestrator | 2026-01-01 01:09:52.345768 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-01-01 01:09:52.345773 | orchestrator | 
Thursday 01 January 2026 01:08:29 +0000 (0:00:22.137) 0:01:28.817 ****** 2026-01-01 01:09:52.345777 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:09:52.345781 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:09:52.345786 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:09:52.345790 | orchestrator | changed: [testbed-manager] 2026-01-01 01:09:52.345794 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:09:52.345798 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:09:52.345803 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:09:52.345807 | orchestrator | 2026-01-01 01:09:52.345811 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-01-01 01:09:52.345816 | orchestrator | Thursday 01 January 2026 01:08:42 +0000 (0:00:13.361) 0:01:42.178 ****** 2026-01-01 01:09:52.345820 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:09:52.345825 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:09:52.345829 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:09:52.345833 | orchestrator | 2026-01-01 01:09:52.345838 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-01 01:09:52.345842 | orchestrator | Thursday 01 January 2026 01:08:48 +0000 (0:00:05.975) 0:01:48.154 ****** 2026-01-01 01:09:52.345846 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:09:52.345851 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:09:52.345855 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:09:52.345859 | orchestrator | 2026-01-01 01:09:52.345863 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-01 01:09:52.345868 | orchestrator | Thursday 01 January 2026 01:08:58 +0000 (0:00:10.017) 0:01:58.172 ****** 2026-01-01 01:09:52.345872 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:09:52.345879 | orchestrator | changed: [testbed-node-3] 2026-01-01 
01:09:52.345884 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:09:52.345888 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:09:52.345892 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:09:52.345897 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:09:52.345901 | orchestrator | changed: [testbed-manager] 2026-01-01 01:09:52.345909 | orchestrator | 2026-01-01 01:09:52.345913 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-01 01:09:52.345917 | orchestrator | Thursday 01 January 2026 01:09:13 +0000 (0:00:14.557) 0:02:12.730 ****** 2026-01-01 01:09:52.345924 | orchestrator | changed: [testbed-manager] 2026-01-01 01:09:52.345932 | orchestrator | 2026-01-01 01:09:52.345940 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-01 01:09:52.345945 | orchestrator | Thursday 01 January 2026 01:09:22 +0000 (0:00:08.980) 0:02:21.710 ****** 2026-01-01 01:09:52.345949 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:09:52.345953 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:09:52.345958 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:09:52.345962 | orchestrator | 2026-01-01 01:09:52.345966 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-01 01:09:52.345971 | orchestrator | Thursday 01 January 2026 01:09:27 +0000 (0:00:05.361) 0:02:27.071 ****** 2026-01-01 01:09:52.345979 | orchestrator | changed: [testbed-manager] 2026-01-01 01:09:52.345986 | orchestrator | 2026-01-01 01:09:52.345993 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-01 01:09:52.345999 | orchestrator | Thursday 01 January 2026 01:09:38 +0000 (0:00:11.066) 0:02:38.138 ****** 2026-01-01 01:09:52.346006 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:09:52.346013 | orchestrator | changed: [testbed-node-5] 2026-01-01 
01:09:52.346073 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:09:52.346080 | orchestrator | 2026-01-01 01:09:52.346085 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:09:52.346090 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-01-01 01:09:52.346096 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-01 01:09:52.346100 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-01 01:09:52.346105 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-01 01:09:52.346109 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-01 01:09:52.346118 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-01 01:09:52.346122 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-01 01:09:52.346127 | orchestrator | 2026-01-01 01:09:52.346131 | orchestrator | 2026-01-01 01:09:52.346136 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:09:52.346140 | orchestrator | Thursday 01 January 2026 01:09:48 +0000 (0:00:10.192) 0:02:48.330 ****** 2026-01-01 01:09:52.346145 | orchestrator | =============================================================================== 2026-01-01 01:09:52.346149 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.14s 2026-01-01 01:09:52.346153 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.56s 2026-01-01 01:09:52.346158 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.47s 
2026-01-01 01:09:52.346162 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.36s 2026-01-01 01:09:52.346166 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 11.07s 2026-01-01 01:09:52.346171 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.19s 2026-01-01 01:09:52.346175 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.02s 2026-01-01 01:09:52.346184 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.98s 2026-01-01 01:09:52.346188 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.44s 2026-01-01 01:09:52.346193 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.05s 2026-01-01 01:09:52.346197 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.98s 2026-01-01 01:09:52.346201 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.36s 2026-01-01 01:09:52.346206 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.83s 2026-01-01 01:09:52.346210 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.07s 2026-01-01 01:09:52.346215 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.98s 2026-01-01 01:09:52.346219 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.47s 2026-01-01 01:09:52.346223 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.27s 2026-01-01 01:09:52.346227 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.07s 2026-01-01 01:09:52.346237 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.88s 
2026-01-01 01:09:52.346241 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.63s 2026-01-01 01:09:52.346258 | orchestrator | 2026-01-01 01:09:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:55.386317 | orchestrator | 2026-01-01 01:09:55 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:55.388268 | orchestrator | 2026-01-01 01:09:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:55.390816 | orchestrator | 2026-01-01 01:09:55 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:09:55.393199 | orchestrator | 2026-01-01 01:09:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:55.393286 | orchestrator | 2026-01-01 01:09:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:58.439099 | orchestrator | 2026-01-01 01:09:58 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:09:58.441417 | orchestrator | 2026-01-01 01:09:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:09:58.444065 | orchestrator | 2026-01-01 01:09:58 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:09:58.447018 | orchestrator | 2026-01-01 01:09:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:09:58.447058 | orchestrator | 2026-01-01 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:01.493846 | orchestrator | 2026-01-01 01:10:01 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:01.494396 | orchestrator | 2026-01-01 01:10:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:01.495799 | orchestrator | 2026-01-01 01:10:01 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:01.497029 | orchestrator | 
2026-01-01 01:10:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:01.497057 | orchestrator | 2026-01-01 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:04.544060 | orchestrator | 2026-01-01 01:10:04 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:04.547113 | orchestrator | 2026-01-01 01:10:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:04.549500 | orchestrator | 2026-01-01 01:10:04 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:04.551387 | orchestrator | 2026-01-01 01:10:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:04.551457 | orchestrator | 2026-01-01 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:07.598669 | orchestrator | 2026-01-01 01:10:07 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:07.602157 | orchestrator | 2026-01-01 01:10:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:07.605571 | orchestrator | 2026-01-01 01:10:07 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:07.608058 | orchestrator | 2026-01-01 01:10:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:07.608413 | orchestrator | 2026-01-01 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:10.653215 | orchestrator | 2026-01-01 01:10:10 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:10.654582 | orchestrator | 2026-01-01 01:10:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:10.657144 | orchestrator | 2026-01-01 01:10:10 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:10.658936 | orchestrator | 2026-01-01 01:10:10 | INFO  | 
Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:10.659477 | orchestrator | 2026-01-01 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:13.700374 | orchestrator | 2026-01-01 01:10:13 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:13.703287 | orchestrator | 2026-01-01 01:10:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:13.705894 | orchestrator | 2026-01-01 01:10:13 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:13.708073 | orchestrator | 2026-01-01 01:10:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:13.708136 | orchestrator | 2026-01-01 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:16.751827 | orchestrator | 2026-01-01 01:10:16 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:16.753057 | orchestrator | 2026-01-01 01:10:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:16.754319 | orchestrator | 2026-01-01 01:10:16 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:16.755219 | orchestrator | 2026-01-01 01:10:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:16.755293 | orchestrator | 2026-01-01 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:19.807769 | orchestrator | 2026-01-01 01:10:19 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:19.811029 | orchestrator | 2026-01-01 01:10:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:19.812304 | orchestrator | 2026-01-01 01:10:19 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:19.813689 | orchestrator | 2026-01-01 01:10:19 | INFO  | Task 
3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:19.813742 | orchestrator | 2026-01-01 01:10:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:22.857383 | orchestrator | 2026-01-01 01:10:22 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:22.860177 | orchestrator | 2026-01-01 01:10:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:22.862961 | orchestrator | 2026-01-01 01:10:22 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:22.865506 | orchestrator | 2026-01-01 01:10:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:22.865541 | orchestrator | 2026-01-01 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:25.915627 | orchestrator | 2026-01-01 01:10:25 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:25.918098 | orchestrator | 2026-01-01 01:10:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:25.920997 | orchestrator | 2026-01-01 01:10:25 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:25.923143 | orchestrator | 2026-01-01 01:10:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:25.923202 | orchestrator | 2026-01-01 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:28.972564 | orchestrator | 2026-01-01 01:10:28 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:28.974918 | orchestrator | 2026-01-01 01:10:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:28.977195 | orchestrator | 2026-01-01 01:10:28 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:28.979011 | orchestrator | 2026-01-01 01:10:28 | INFO  | Task 
3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:28.979054 | orchestrator | 2026-01-01 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:32.021430 | orchestrator | 2026-01-01 01:10:32 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:32.022838 | orchestrator | 2026-01-01 01:10:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:32.024775 | orchestrator | 2026-01-01 01:10:32 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:32.026354 | orchestrator | 2026-01-01 01:10:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:32.026418 | orchestrator | 2026-01-01 01:10:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:35.078961 | orchestrator | 2026-01-01 01:10:35 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:35.080654 | orchestrator | 2026-01-01 01:10:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:35.083952 | orchestrator | 2026-01-01 01:10:35 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:35.086721 | orchestrator | 2026-01-01 01:10:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:35.086758 | orchestrator | 2026-01-01 01:10:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:38.139511 | orchestrator | 2026-01-01 01:10:38 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:38.141742 | orchestrator | 2026-01-01 01:10:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:38.145631 | orchestrator | 2026-01-01 01:10:38 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:38.148431 | orchestrator | 2026-01-01 01:10:38 | INFO  | Task 
3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:38.148511 | orchestrator | 2026-01-01 01:10:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:41.190247 | orchestrator | 2026-01-01 01:10:41 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:41.191963 | orchestrator | 2026-01-01 01:10:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:41.193564 | orchestrator | 2026-01-01 01:10:41 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:41.196557 | orchestrator | 2026-01-01 01:10:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:41.196820 | orchestrator | 2026-01-01 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:44.266624 | orchestrator | 2026-01-01 01:10:44 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:44.269229 | orchestrator | 2026-01-01 01:10:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:44.271686 | orchestrator | 2026-01-01 01:10:44 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:44.274608 | orchestrator | 2026-01-01 01:10:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:44.275180 | orchestrator | 2026-01-01 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:47.322631 | orchestrator | 2026-01-01 01:10:47 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:47.325265 | orchestrator | 2026-01-01 01:10:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:47.327722 | orchestrator | 2026-01-01 01:10:47 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:47.329531 | orchestrator | 2026-01-01 01:10:47 | INFO  | Task 
3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:47.329557 | orchestrator | 2026-01-01 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:50.376851 | orchestrator | 2026-01-01 01:10:50 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:50.378310 | orchestrator | 2026-01-01 01:10:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:50.381624 | orchestrator | 2026-01-01 01:10:50 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:50.383394 | orchestrator | 2026-01-01 01:10:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:50.383486 | orchestrator | 2026-01-01 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:53.434342 | orchestrator | 2026-01-01 01:10:53 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:53.435850 | orchestrator | 2026-01-01 01:10:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:53.440162 | orchestrator | 2026-01-01 01:10:53 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:53.441891 | orchestrator | 2026-01-01 01:10:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:53.441927 | orchestrator | 2026-01-01 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:56.495126 | orchestrator | 2026-01-01 01:10:56 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state STARTED 2026-01-01 01:10:56.497058 | orchestrator | 2026-01-01 01:10:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:56.498210 | orchestrator | 2026-01-01 01:10:56 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:56.499580 | orchestrator | 2026-01-01 01:10:56 | INFO  | Task 
3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:56.500040 | orchestrator | 2026-01-01 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:59.551551 | orchestrator | 2026-01-01 01:10:59 | INFO  | Task ff32c0f3-8c05-4844-909d-ad0514ee5e4d is in state SUCCESS 2026-01-01 01:10:59.553431 | orchestrator | 2026-01-01 01:10:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:10:59.556331 | orchestrator | 2026-01-01 01:10:59 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:10:59.558617 | orchestrator | 2026-01-01 01:10:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:10:59.558728 | orchestrator | 2026-01-01 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:02.603334 | orchestrator | 2026-01-01 01:11:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:11:02.605030 | orchestrator | 2026-01-01 01:11:02 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:11:02.606908 | orchestrator | 2026-01-01 01:11:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:11:02.606946 | orchestrator | 2026-01-01 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:05.663691 | orchestrator | 2026-01-01 01:11:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:11:05.665451 | orchestrator | 2026-01-01 01:11:05 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:11:05.668785 | orchestrator | 2026-01-01 01:11:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:11:05.668836 | orchestrator | 2026-01-01 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:08.714650 | orchestrator | 2026-01-01 01:11:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:11:08.716820 | orchestrator | 2026-01-01 01:11:08 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:11:08.720230 | orchestrator | 2026-01-01 01:11:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:11:08.720282 | orchestrator | 2026-01-01 01:11:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:11.763609 | orchestrator | 2026-01-01 01:11:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:11:11.764593 | orchestrator | 2026-01-01 01:11:11 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:11:11.766479 | orchestrator | 2026-01-01 01:11:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:11:11.766534 | orchestrator | 2026-01-01 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:14.821557 | orchestrator | 2026-01-01 01:11:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:11:14.823669 | orchestrator | 2026-01-01 01:11:14 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:11:14.825845 | orchestrator | 2026-01-01 01:11:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:11:14.825892 | orchestrator | 2026-01-01 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:17.876892 | orchestrator | 2026-01-01 01:11:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:11:17.878475 | orchestrator | 2026-01-01 01:11:17 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:11:17.880346 | orchestrator | 2026-01-01 01:11:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:11:17.880374 | orchestrator | 2026-01-01 01:11:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:20.936377 | orchestrator | 
2026-01-01 01:11:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:11:20.938344 | orchestrator | 2026-01-01 01:11:20 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:11:20.940263 | orchestrator | 2026-01-01 01:11:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:11:20.940299 | orchestrator | 2026-01-01 01:11:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:23.995770 | orchestrator | 2026-01-01 01:11:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:11:23.997258 | orchestrator | 2026-01-01 01:11:23 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:11:23.998916 | orchestrator | 2026-01-01 01:11:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:11:23.998949 | orchestrator | 2026-01-01 01:11:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:27.047898 | orchestrator | 2026-01-01 01:11:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:11:27.049187 | orchestrator | 2026-01-01 01:11:27 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:11:27.052944 | orchestrator | 2026-01-01 01:11:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:11:27.053010 | orchestrator | 2026-01-01 01:11:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:30.104561 | orchestrator | 2026-01-01 01:11:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:11:30.105880 | orchestrator | 2026-01-01 01:11:30 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state STARTED 2026-01-01 01:11:30.107480 | orchestrator | 2026-01-01 01:11:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:11:30.107519 | orchestrator | 2026-01-01 01:11:30 | INFO  | 
Wait 1 second(s) until the next check 2026-01-01 01:11:33.159981 | orchestrator | 2026-01-01 01:11:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:11:33.163111 | orchestrator | 2026-01-01 01:11:33.163225 | orchestrator | 2026-01-01 01:11:33.163240 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-01 01:11:33.163253 | orchestrator | 2026-01-01 01:11:33.163264 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-01 01:11:33.163276 | orchestrator | Thursday 01 January 2026 01:05:15 +0000 (0:00:00.094) 0:00:00.094 ****** 2026-01-01 01:11:33.163287 | orchestrator | changed: [localhost] 2026-01-01 01:11:33.163299 | orchestrator | 2026-01-01 01:11:33.163311 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-01-01 01:11:33.163322 | orchestrator | Thursday 01 January 2026 01:05:16 +0000 (0:00:00.910) 0:00:01.004 ****** 2026-01-01 01:11:33.163332 | orchestrator | 2026-01-01 01:11:33.163344 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-01 01:11:33.163355 | orchestrator | 2026-01-01 01:11:33.163365 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-01 01:11:33.163376 | orchestrator | 2026-01-01 01:11:33.163387 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-01 01:11:33.163398 | orchestrator | 2026-01-01 01:11:33.163408 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-01 01:11:33.163445 | orchestrator | 2026-01-01 01:11:33.163456 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-01 01:11:33.163467 | orchestrator | 2026-01-01 01:11:33.163478 | orchestrator | STILL ALIVE [task 'Download ironic-agent 
initramfs' is running] **************** 2026-01-01 01:11:33.163489 | orchestrator | changed: [localhost] 2026-01-01 01:11:33.163501 | orchestrator | 2026-01-01 01:11:33.163512 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-01-01 01:11:33.163524 | orchestrator | Thursday 01 January 2026 01:10:43 +0000 (0:05:27.212) 0:05:28.217 ****** 2026-01-01 01:11:33.163535 | orchestrator | changed: [localhost] 2026-01-01 01:11:33.163546 | orchestrator | 2026-01-01 01:11:33.163557 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:11:33.163568 | orchestrator | 2026-01-01 01:11:33.163582 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:11:33.163608 | orchestrator | Thursday 01 January 2026 01:10:56 +0000 (0:00:12.934) 0:05:41.151 ****** 2026-01-01 01:11:33.163632 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:11:33.163649 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:11:33.163666 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:11:33.163684 | orchestrator | 2026-01-01 01:11:33.163704 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:11:33.163725 | orchestrator | Thursday 01 January 2026 01:10:56 +0000 (0:00:00.262) 0:05:41.414 ****** 2026-01-01 01:11:33.163744 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-01-01 01:11:33.163761 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-01-01 01:11:33.163776 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-01-01 01:11:33.163790 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-01-01 01:11:33.163803 | orchestrator | 2026-01-01 01:11:33.163815 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-01-01 01:11:33.163828 | 
orchestrator | skipping: no hosts matched 2026-01-01 01:11:33.163841 | orchestrator | 2026-01-01 01:11:33.163854 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:11:33.163867 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:11:33.163883 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:11:33.163896 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:11:33.163909 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:11:33.163923 | orchestrator | 2026-01-01 01:11:33.163935 | orchestrator | 2026-01-01 01:11:33.163949 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:11:33.163963 | orchestrator | Thursday 01 January 2026 01:10:57 +0000 (0:00:00.488) 0:05:41.902 ****** 2026-01-01 01:11:33.163990 | orchestrator | =============================================================================== 2026-01-01 01:11:33.164004 | orchestrator | Download ironic-agent initramfs --------------------------------------- 327.21s 2026-01-01 01:11:33.164017 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.93s 2026-01-01 01:11:33.164030 | orchestrator | Ensure the destination directory exists --------------------------------- 0.91s 2026-01-01 01:11:33.164042 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2026-01-01 01:11:33.164053 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2026-01-01 01:11:33.164064 | orchestrator | 2026-01-01 01:11:33.164090 | orchestrator | 2026-01-01 01:11:33.164101 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-01-01 01:11:33.164122 | orchestrator | 2026-01-01 01:11:33.164164 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:11:33.164183 | orchestrator | Thursday 01 January 2026 01:09:53 +0000 (0:00:00.196) 0:00:00.196 ****** 2026-01-01 01:11:33.164202 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:11:33.164220 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:11:33.164238 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:11:33.164255 | orchestrator | 2026-01-01 01:11:33.164272 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:11:33.164290 | orchestrator | Thursday 01 January 2026 01:09:53 +0000 (0:00:00.275) 0:00:00.472 ****** 2026-01-01 01:11:33.164310 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-01 01:11:33.164329 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-01 01:11:33.164348 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-01 01:11:33.164366 | orchestrator | 2026-01-01 01:11:33.164405 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-01 01:11:33.164427 | orchestrator | 2026-01-01 01:11:33.164445 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-01 01:11:33.164463 | orchestrator | Thursday 01 January 2026 01:09:54 +0000 (0:00:00.405) 0:00:00.877 ****** 2026-01-01 01:11:33.164474 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:11:33.164485 | orchestrator | 2026-01-01 01:11:33.164496 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-01 01:11:33.164507 | orchestrator | Thursday 01 January 2026 01:09:54 +0000 (0:00:00.526) 0:00:01.404 ****** 2026-01-01 
01:11:33.164522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.164537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.164549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.164561 | orchestrator | 2026-01-01 01:11:33.164582 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-01 01:11:33.164601 | orchestrator | Thursday 01 January 2026 01:09:55 +0000 (0:00:00.805) 0:00:02.209 ****** 2026-01-01 01:11:33.164613 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 01:11:33.164624 | orchestrator | 2026-01-01 01:11:33.164635 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-01 01:11:33.164646 | orchestrator | Thursday 01 January 2026 01:09:56 +0000 (0:00:00.810) 0:00:03.020 ****** 2026-01-01 01:11:33.164657 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:11:33.164668 | orchestrator | 2026-01-01 01:11:33.164679 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-01 01:11:33.164690 | orchestrator | Thursday 01 January 2026 01:09:57 +0000 (0:00:00.719) 0:00:03.740 ****** 2026-01-01 01:11:33.164709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.164721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.164733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.164745 | orchestrator | 2026-01-01 01:11:33.164756 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 
2026-01-01 01:11:33.164767 | orchestrator | Thursday 01 January 2026 01:09:58 +0000 (0:00:01.448) 0:00:05.188 ****** 2026-01-01 01:11:33.164778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:11:33.164796 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:11:33.164813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:11:33.164825 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:11:33.164837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:11:33.164848 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:11:33.164859 | orchestrator | 2026-01-01 01:11:33.164899 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-01 01:11:33.164919 | orchestrator | Thursday 01 January 2026 01:09:59 +0000 (0:00:00.445) 0:00:05.634 ****** 2026-01-01 01:11:33.164931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:11:33.164942 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:11:33.164954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:11:33.164965 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:11:33.164976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:11:33.164994 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:11:33.165005 | orchestrator | 2026-01-01 01:11:33.165016 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-01 01:11:33.165027 | orchestrator | Thursday 01 January 2026 01:09:59 +0000 (0:00:00.921) 0:00:06.555 ****** 2026-01-01 01:11:33.165043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.165062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.165074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option 
httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.165086 | orchestrator | 2026-01-01 01:11:33.165097 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-01 01:11:33.165108 | orchestrator | Thursday 01 January 2026 01:10:01 +0000 (0:00:01.363) 0:00:07.919 ****** 2026-01-01 01:11:33.165119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.165166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.165183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.165195 | orchestrator | 2026-01-01 01:11:33.165206 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-01 01:11:33.165217 | orchestrator | Thursday 01 January 2026 01:10:02 +0000 (0:00:01.401) 0:00:09.320 ****** 2026-01-01 01:11:33.165228 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:11:33.165239 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:11:33.165249 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:11:33.165260 | orchestrator | 2026-01-01 01:11:33.165271 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-01 01:11:33.165282 | orchestrator | Thursday 01 January 2026 01:10:03 +0000 (0:00:00.505) 0:00:09.826 ****** 2026-01-01 01:11:33.165293 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-01 01:11:33.165304 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-01 01:11:33.165315 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-01 01:11:33.165326 | orchestrator | 2026-01-01 01:11:33.165337 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-01 01:11:33.165353 | orchestrator | Thursday 01 January 2026 01:10:04 +0000 (0:00:01.388) 0:00:11.214 ****** 2026-01-01 01:11:33.165365 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-01 01:11:33.165376 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-01 01:11:33.165387 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-01 01:11:33.165398 | orchestrator | 2026-01-01 01:11:33.165408 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-01-01 01:11:33.165419 | orchestrator | Thursday 01 January 2026 01:10:05 +0000 (0:00:01.272) 0:00:12.487 ****** 2026-01-01 01:11:33.165430 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 01:11:33.165441 | orchestrator | 2026-01-01 01:11:33.165452 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-01-01 01:11:33.165463 | orchestrator | Thursday 01 January 2026 01:10:06 +0000 (0:00:00.753) 0:00:13.240 ****** 2026-01-01 01:11:33.165480 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:11:33.165491 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:11:33.165502 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:11:33.165513 | orchestrator | 2026-01-01 01:11:33.165524 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-01 01:11:33.165535 | orchestrator | Thursday 01 January 2026 01:10:07 +0000 (0:00:00.735) 0:00:13.976 ****** 2026-01-01 01:11:33.165546 | orchestrator | 
changed: [testbed-node-0] 2026-01-01 01:11:33.165557 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:11:33.165568 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:11:33.165578 | orchestrator | 2026-01-01 01:11:33.165589 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-01-01 01:11:33.165600 | orchestrator | Thursday 01 January 2026 01:10:09 +0000 (0:00:01.590) 0:00:15.567 ****** 2026-01-01 01:11:33.165612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.165624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': 
['option httpchk']}}}}) 2026-01-01 01:11:33.165640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-01 01:11:33.165651 | orchestrator | 2026-01-01 01:11:33.165663 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-01-01 01:11:33.165673 | orchestrator | Thursday 01 January 2026 01:10:09 +0000 (0:00:00.995) 0:00:16.562 ****** 2026-01-01 01:11:33.165684 | orchestrator | changed: [testbed-node-0] => { 2026-01-01 01:11:33.165695 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:11:33.165706 | orchestrator | } 2026-01-01 01:11:33.165717 | orchestrator | changed: [testbed-node-1] => { 2026-01-01 01:11:33.165728 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:11:33.165739 | orchestrator | } 2026-01-01 01:11:33.165749 | orchestrator | changed: [testbed-node-2] => { 2026-01-01 01:11:33.165760 | orchestrator |  "msg": "Notifying handlers" 2026-01-01 01:11:33.165771 | orchestrator | } 2026-01-01 01:11:33.165788 | orchestrator | 2026-01-01 01:11:33.165803 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-01 01:11:33.165815 | orchestrator | Thursday 01 January 2026 01:10:10 +0000 (0:00:00.342) 0:00:16.905 ****** 2026-01-01 01:11:33.165826 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:11:33.165838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:11:33.165850 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:11:33.165861 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:11:33.165872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-01 01:11:33.165883 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:11:33.165894 | orchestrator | 2026-01-01 01:11:33.165905 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-01 01:11:33.165916 | orchestrator | Thursday 01 January 2026 01:10:11 +0000 (0:00:00.710) 0:00:17.616 ****** 2026-01-01 01:11:33.165927 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:11:33.165938 | orchestrator | 2026-01-01 01:11:33.165949 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-01 01:11:33.165960 | orchestrator | Thursday 01 January 2026 01:10:13 +0000 (0:00:02.501) 0:00:20.117 ****** 2026-01-01 01:11:33.165976 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:11:33.165987 | orchestrator | 2026-01-01 01:11:33.165998 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-01 01:11:33.166009 | orchestrator | Thursday 01 January 2026 01:10:15 +0000 (0:00:02.412) 0:00:22.530 ****** 2026-01-01 01:11:33.166071 | orchestrator | 2026-01-01 01:11:33.166083 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-01 01:11:33.166094 | orchestrator | Thursday 01 January 2026 01:10:16 +0000 (0:00:00.070) 0:00:22.600 ****** 2026-01-01 01:11:33.166105 | orchestrator | 2026-01-01 01:11:33.166116 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-01-01 01:11:33.166206 | orchestrator | Thursday 01 January 2026 01:10:16 +0000 (0:00:00.070) 0:00:22.671 ****** 2026-01-01 01:11:33.166219 | orchestrator | 2026-01-01 01:11:33.166230 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-01 01:11:33.166241 | orchestrator | Thursday 01 January 2026 01:10:16 +0000 (0:00:00.088) 0:00:22.759 ****** 2026-01-01 01:11:33.166252 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:11:33.166263 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:11:33.166274 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:11:33.166285 | orchestrator | 2026-01-01 01:11:33.166295 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-01 01:11:33.166306 | orchestrator | Thursday 01 January 2026 01:10:17 +0000 (0:00:01.783) 0:00:24.543 ****** 2026-01-01 01:11:33.166317 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:11:33.166328 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:11:33.166339 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-01 01:11:33.166358 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-01 01:11:33.166369 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2026-01-01 01:11:33.166380 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:11:33.166391 | orchestrator |
2026-01-01 01:11:33.166402 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-01 01:11:33.166413 | orchestrator | Thursday 01 January 2026 01:10:57 +0000 (0:00:39.722) 0:01:04.266 ******
2026-01-01 01:11:33.166424 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:11:33.166435 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:11:33.166446 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:11:33.166457 | orchestrator |
2026-01-01 01:11:33.166467 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-01 01:11:33.166478 | orchestrator | Thursday 01 January 2026 01:11:26 +0000 (0:00:28.923) 0:01:33.189 ******
2026-01-01 01:11:33.166489 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:11:33.166500 | orchestrator |
2026-01-01 01:11:33.166511 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-01 01:11:33.166522 | orchestrator | Thursday 01 January 2026 01:11:29 +0000 (0:00:02.473) 0:01:35.663 ******
2026-01-01 01:11:33.166533 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:11:33.166544 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:11:33.166555 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:11:33.166565 | orchestrator |
2026-01-01 01:11:33.166576 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-01-01 01:11:33.166587 | orchestrator | Thursday 01 January 2026 01:11:29 +0000 (0:00:00.309) 0:01:35.972 ******
2026-01-01 01:11:33.166599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-01-01 01:11:33.166612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-01 01:11:33.166623 | orchestrator |
2026-01-01 01:11:33.166633 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-01 01:11:33.166643 | orchestrator | Thursday 01 January 2026 01:11:31 +0000 (0:00:02.590) 0:01:38.563 ******
2026-01-01 01:11:33.166653 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:11:33.166663 | orchestrator |
2026-01-01 01:11:33.166672 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:11:33.166682 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-01 01:11:33.166699 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-01 01:11:33.166709 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-01 01:11:33.166719 | orchestrator |
2026-01-01 01:11:33.166728 | orchestrator |
2026-01-01 01:11:33.166738 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:11:33.166747 | orchestrator | Thursday 01 January 2026 01:11:32 +0000 (0:00:00.286) 0:01:38.850 ******
2026-01-01 01:11:33.166757 | orchestrator | ===============================================================================
2026-01-01 01:11:33.166767 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.72s
2026-01-01 01:11:33.166782 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.92s
2026-01-01 01:11:33.166792 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.59s
2026-01-01 01:11:33.166801 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.50s
2026-01-01 01:11:33.166811 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.47s
2026-01-01 01:11:33.166820 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.41s
2026-01-01 01:11:33.166830 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.78s
2026-01-01 01:11:33.166839 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.59s
2026-01-01 01:11:33.166849 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.45s
2026-01-01 01:11:33.166859 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.40s
2026-01-01 01:11:33.166868 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.39s
2026-01-01 01:11:33.166878 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.36s
2026-01-01 01:11:33.166887 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.27s
2026-01-01 01:11:33.166897 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.00s
2026-01-01 01:11:33.166906 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.92s
2026-01-01 01:11:33.166916 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.81s
2026-01-01 01:11:33.166931 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.81s
2026-01-01 01:11:33.166941 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.75s
2026-01-01 01:11:33.166951 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.74s
2026-01-01 01:11:33.166960 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.72s
2026-01-01 01:11:33.166970 | orchestrator | 2026-01-01 01:11:33 | INFO  | Task c4f82893-231e-450a-952b-e30c3bc796c8 is in state SUCCESS
2026-01-01 01:11:33.166980 | orchestrator | 2026-01-01 01:11:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:11:33.166990 | orchestrator | 2026-01-01 01:11:33 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:11:36.220658 | orchestrator | 2026-01-01 01:11:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:11:36.223401 | orchestrator | 2026-01-01 01:11:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:11:36.223456 | orchestrator | 2026-01-01 01:11:36 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:11:39.269737 | orchestrator | 2026-01-01 01:11:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:11:39.270821 | orchestrator | 2026-01-01 01:11:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:11:39.270854 | orchestrator | 2026-01-01 01:11:39 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:11:42.324213 | orchestrator | 2026-01-01 01:11:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:11:42.324829 | orchestrator | 2026-01-01 01:11:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:11:42.324856 | orchestrator | 2026-01-01 01:11:42 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:11:45.375655 | orchestrator | 2026-01-01 01:11:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:11:45.377602 | orchestrator | 2026-01-01 01:11:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:11:45.377710 | orchestrator | 2026-01-01 01:11:45 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:11:48.419710 | orchestrator | 2026-01-01 01:11:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:11:48.420364 | orchestrator | 2026-01-01 01:11:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:11:48.420423 | orchestrator | 2026-01-01 01:11:48 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:11:51.466767 | orchestrator | 2026-01-01 01:11:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:11:51.467981 | orchestrator | 2026-01-01 01:11:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:11:51.468036 | orchestrator | 2026-01-01 01:11:51 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:11:54.517037 | orchestrator | 2026-01-01 01:11:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:11:54.518710 | orchestrator | 2026-01-01 01:11:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:11:54.518792 | orchestrator | 2026-01-01 01:11:54 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:11:57.562914 | orchestrator | 2026-01-01 01:11:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:11:57.566514 | orchestrator | 2026-01-01 01:11:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:11:57.566621 | orchestrator | 2026-01-01 01:11:57 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:00.609139 | orchestrator | 2026-01-01 01:12:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:00.610609 | orchestrator | 2026-01-01 01:12:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:00.610656 | orchestrator | 2026-01-01 01:12:00 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:03.660655 | orchestrator | 2026-01-01 01:12:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:03.663043 | orchestrator | 2026-01-01 01:12:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:03.663505 | orchestrator | 2026-01-01 01:12:03 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:06.713953 | orchestrator | 2026-01-01 01:12:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:06.715293 | orchestrator | 2026-01-01 01:12:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:06.715316 | orchestrator | 2026-01-01 01:12:06 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:09.756959 | orchestrator | 2026-01-01 01:12:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:09.758918 | orchestrator | 2026-01-01 01:12:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:09.759016 | orchestrator | 2026-01-01 01:12:09 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:12.809117 | orchestrator | 2026-01-01 01:12:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:12.810731 | orchestrator | 2026-01-01 01:12:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:12.847489 | orchestrator | 2026-01-01 01:12:12 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:15.861570 | orchestrator | 2026-01-01 01:12:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:15.864350 | orchestrator | 2026-01-01 01:12:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:15.864404 | orchestrator | 2026-01-01 01:12:15 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:18.915897 | orchestrator | 2026-01-01 01:12:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:18.917832 | orchestrator | 2026-01-01 01:12:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:18.918128 | orchestrator | 2026-01-01 01:12:18 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:21.966907 | orchestrator | 2026-01-01 01:12:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:21.968649 | orchestrator | 2026-01-01 01:12:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:21.968686 | orchestrator | 2026-01-01 01:12:21 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:25.016296 | orchestrator | 2026-01-01 01:12:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:25.018154 | orchestrator | 2026-01-01 01:12:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:25.018288 | orchestrator | 2026-01-01 01:12:25 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:28.066690 | orchestrator | 2026-01-01 01:12:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:28.067950 | orchestrator | 2026-01-01 01:12:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:28.068045 | orchestrator | 2026-01-01 01:12:28 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:12:31.114630 | orchestrator | 2026-01-01 01:12:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:12:31.118108 | orchestrator | 2026-01-01 01:12:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:12:31.118212 | orchestrator | 2026-01-01 01:12:31 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 01:12:34.166752 | orchestrator | 2026-01-01 01:12:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:12:34.168506 | orchestrator | 2026-01-01 01:12:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:12:34.168623 | orchestrator | 2026-01-01 01:12:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:37.214102 | orchestrator | 2026-01-01 01:12:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:12:37.215568 | orchestrator | 2026-01-01 01:12:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:12:37.215590 | orchestrator | 2026-01-01 01:12:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:40.256805 | orchestrator | 2026-01-01 01:12:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:12:40.258903 | orchestrator | 2026-01-01 01:12:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:12:40.258944 | orchestrator | 2026-01-01 01:12:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:43.302917 | orchestrator | 2026-01-01 01:12:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:12:43.304795 | orchestrator | 2026-01-01 01:12:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:12:43.304925 | orchestrator | 2026-01-01 01:12:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:46.359666 | orchestrator | 2026-01-01 01:12:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:12:46.361972 | orchestrator | 2026-01-01 01:12:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:12:46.362091 | orchestrator | 2026-01-01 01:12:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:49.408883 | orchestrator | 2026-01-01 
01:12:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:12:49.409679 | orchestrator | 2026-01-01 01:12:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:12:49.409718 | orchestrator | 2026-01-01 01:12:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:52.455897 | orchestrator | 2026-01-01 01:12:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:12:52.458145 | orchestrator | 2026-01-01 01:12:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:12:52.458215 | orchestrator | 2026-01-01 01:12:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:55.502973 | orchestrator | 2026-01-01 01:12:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:12:55.504193 | orchestrator | 2026-01-01 01:12:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:12:55.504222 | orchestrator | 2026-01-01 01:12:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:58.555796 | orchestrator | 2026-01-01 01:12:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:12:58.559625 | orchestrator | 2026-01-01 01:12:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:12:58.559681 | orchestrator | 2026-01-01 01:12:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:01.609056 | orchestrator | 2026-01-01 01:13:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:01.610650 | orchestrator | 2026-01-01 01:13:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:01.610714 | orchestrator | 2026-01-01 01:13:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:04.663308 | orchestrator | 2026-01-01 01:13:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:13:04.666895 | orchestrator | 2026-01-01 01:13:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:04.666993 | orchestrator | 2026-01-01 01:13:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:07.711202 | orchestrator | 2026-01-01 01:13:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:07.712165 | orchestrator | 2026-01-01 01:13:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:07.712309 | orchestrator | 2026-01-01 01:13:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:10.757708 | orchestrator | 2026-01-01 01:13:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:10.759776 | orchestrator | 2026-01-01 01:13:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:10.759876 | orchestrator | 2026-01-01 01:13:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:13.810868 | orchestrator | 2026-01-01 01:13:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:13.812618 | orchestrator | 2026-01-01 01:13:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:13.812653 | orchestrator | 2026-01-01 01:13:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:16.854629 | orchestrator | 2026-01-01 01:13:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:16.855613 | orchestrator | 2026-01-01 01:13:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:16.855643 | orchestrator | 2026-01-01 01:13:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:19.900358 | orchestrator | 2026-01-01 01:13:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:19.904767 | orchestrator | 2026-01-01 01:13:19 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:19.905087 | orchestrator | 2026-01-01 01:13:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:22.948783 | orchestrator | 2026-01-01 01:13:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:22.950150 | orchestrator | 2026-01-01 01:13:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:22.950225 | orchestrator | 2026-01-01 01:13:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:25.999979 | orchestrator | 2026-01-01 01:13:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:26.002001 | orchestrator | 2026-01-01 01:13:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:26.002080 | orchestrator | 2026-01-01 01:13:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:29.057994 | orchestrator | 2026-01-01 01:13:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:29.059825 | orchestrator | 2026-01-01 01:13:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:29.059881 | orchestrator | 2026-01-01 01:13:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:32.108020 | orchestrator | 2026-01-01 01:13:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:32.110992 | orchestrator | 2026-01-01 01:13:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:32.111042 | orchestrator | 2026-01-01 01:13:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:35.147827 | orchestrator | 2026-01-01 01:13:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:35.148020 | orchestrator | 2026-01-01 01:13:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:13:35.148044 | orchestrator | 2026-01-01 01:13:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:38.201462 | orchestrator | 2026-01-01 01:13:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:38.203187 | orchestrator | 2026-01-01 01:13:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:38.203217 | orchestrator | 2026-01-01 01:13:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:41.251237 | orchestrator | 2026-01-01 01:13:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:41.251877 | orchestrator | 2026-01-01 01:13:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:41.252147 | orchestrator | 2026-01-01 01:13:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:44.297853 | orchestrator | 2026-01-01 01:13:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:44.299862 | orchestrator | 2026-01-01 01:13:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:44.299937 | orchestrator | 2026-01-01 01:13:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:47.349311 | orchestrator | 2026-01-01 01:13:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:47.351027 | orchestrator | 2026-01-01 01:13:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:47.351088 | orchestrator | 2026-01-01 01:13:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:50.399472 | orchestrator | 2026-01-01 01:13:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:50.400697 | orchestrator | 2026-01-01 01:13:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:50.400725 | orchestrator | 2026-01-01 01:13:50 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:13:53.444544 | orchestrator | 2026-01-01 01:13:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:53.447763 | orchestrator | 2026-01-01 01:13:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:53.447909 | orchestrator | 2026-01-01 01:13:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:56.497632 | orchestrator | 2026-01-01 01:13:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:56.499250 | orchestrator | 2026-01-01 01:13:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:56.499285 | orchestrator | 2026-01-01 01:13:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:59.549485 | orchestrator | 2026-01-01 01:13:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:13:59.553978 | orchestrator | 2026-01-01 01:13:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:13:59.554097 | orchestrator | 2026-01-01 01:13:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:02.600401 | orchestrator | 2026-01-01 01:14:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:02.602385 | orchestrator | 2026-01-01 01:14:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:02.602424 | orchestrator | 2026-01-01 01:14:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:05.646405 | orchestrator | 2026-01-01 01:14:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:05.648127 | orchestrator | 2026-01-01 01:14:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:05.648181 | orchestrator | 2026-01-01 01:14:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:08.694825 | orchestrator | 2026-01-01 
01:14:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:08.696054 | orchestrator | 2026-01-01 01:14:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:08.696231 | orchestrator | 2026-01-01 01:14:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:11.745946 | orchestrator | 2026-01-01 01:14:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:11.748209 | orchestrator | 2026-01-01 01:14:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:11.748394 | orchestrator | 2026-01-01 01:14:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:14.796539 | orchestrator | 2026-01-01 01:14:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:14.798363 | orchestrator | 2026-01-01 01:14:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:14.798563 | orchestrator | 2026-01-01 01:14:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:17.845492 | orchestrator | 2026-01-01 01:14:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:17.847314 | orchestrator | 2026-01-01 01:14:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:17.847355 | orchestrator | 2026-01-01 01:14:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:20.901283 | orchestrator | 2026-01-01 01:14:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:20.903366 | orchestrator | 2026-01-01 01:14:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:20.903408 | orchestrator | 2026-01-01 01:14:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:23.949640 | orchestrator | 2026-01-01 01:14:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:14:23.952494 | orchestrator | 2026-01-01 01:14:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:23.952563 | orchestrator | 2026-01-01 01:14:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:26.999683 | orchestrator | 2026-01-01 01:14:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:27.001198 | orchestrator | 2026-01-01 01:14:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:27.001239 | orchestrator | 2026-01-01 01:14:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:30.049471 | orchestrator | 2026-01-01 01:14:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:30.050291 | orchestrator | 2026-01-01 01:14:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:30.050385 | orchestrator | 2026-01-01 01:14:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:33.095194 | orchestrator | 2026-01-01 01:14:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:33.097184 | orchestrator | 2026-01-01 01:14:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:33.097238 | orchestrator | 2026-01-01 01:14:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:36.144905 | orchestrator | 2026-01-01 01:14:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:36.146321 | orchestrator | 2026-01-01 01:14:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:36.146394 | orchestrator | 2026-01-01 01:14:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:39.196868 | orchestrator | 2026-01-01 01:14:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:39.198326 | orchestrator | 2026-01-01 01:14:39 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:39.198581 | orchestrator | 2026-01-01 01:14:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:42.246562 | orchestrator | 2026-01-01 01:14:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:42.247789 | orchestrator | 2026-01-01 01:14:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:42.247888 | orchestrator | 2026-01-01 01:14:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:45.301674 | orchestrator | 2026-01-01 01:14:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:45.303960 | orchestrator | 2026-01-01 01:14:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:45.304023 | orchestrator | 2026-01-01 01:14:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:48.351308 | orchestrator | 2026-01-01 01:14:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:48.352709 | orchestrator | 2026-01-01 01:14:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:48.352747 | orchestrator | 2026-01-01 01:14:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:51.403468 | orchestrator | 2026-01-01 01:14:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:51.405643 | orchestrator | 2026-01-01 01:14:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:51.405975 | orchestrator | 2026-01-01 01:14:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:54.450495 | orchestrator | 2026-01-01 01:14:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:54.451686 | orchestrator | 2026-01-01 01:14:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:14:54.451718 | orchestrator | 2026-01-01 01:14:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:14:57.507118 | orchestrator | 2026-01-01 01:14:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:14:57.509662 | orchestrator | 2026-01-01 01:14:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:14:57.509769 | orchestrator | 2026-01-01 01:14:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:00.557258 | orchestrator | 2026-01-01 01:15:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:00.559063 | orchestrator | 2026-01-01 01:15:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:00.559127 | orchestrator | 2026-01-01 01:15:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:03.609956 | orchestrator | 2026-01-01 01:15:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:03.611533 | orchestrator | 2026-01-01 01:15:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:03.611626 | orchestrator | 2026-01-01 01:15:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:06.657699 | orchestrator | 2026-01-01 01:15:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:06.660225 | orchestrator | 2026-01-01 01:15:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:06.660294 | orchestrator | 2026-01-01 01:15:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:09.707700 | orchestrator | 2026-01-01 01:15:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:09.712181 | orchestrator | 2026-01-01 01:15:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:09.712225 | orchestrator | 2026-01-01 01:15:09 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:15:12.756388 | orchestrator | 2026-01-01 01:15:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:12.758927 | orchestrator | 2026-01-01 01:15:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:12.758961 | orchestrator | 2026-01-01 01:15:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:15.809276 | orchestrator | 2026-01-01 01:15:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:15.810940 | orchestrator | 2026-01-01 01:15:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:15.810967 | orchestrator | 2026-01-01 01:15:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:18.860881 | orchestrator | 2026-01-01 01:15:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:18.862086 | orchestrator | 2026-01-01 01:15:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:18.862140 | orchestrator | 2026-01-01 01:15:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:21.915628 | orchestrator | 2026-01-01 01:15:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:21.917205 | orchestrator | 2026-01-01 01:15:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:21.917270 | orchestrator | 2026-01-01 01:15:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:24.963218 | orchestrator | 2026-01-01 01:15:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:24.963467 | orchestrator | 2026-01-01 01:15:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:24.963493 | orchestrator | 2026-01-01 01:15:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:28.016471 | orchestrator | 2026-01-01 
01:15:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:28.018077 | orchestrator | 2026-01-01 01:15:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:28.018121 | orchestrator | 2026-01-01 01:15:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:31.064825 | orchestrator | 2026-01-01 01:15:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:31.065770 | orchestrator | 2026-01-01 01:15:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:31.065833 | orchestrator | 2026-01-01 01:15:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:34.103590 | orchestrator | 2026-01-01 01:15:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:34.105475 | orchestrator | 2026-01-01 01:15:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:34.105509 | orchestrator | 2026-01-01 01:15:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:37.156831 | orchestrator | 2026-01-01 01:15:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:37.158508 | orchestrator | 2026-01-01 01:15:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:37.158568 | orchestrator | 2026-01-01 01:15:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:40.204263 | orchestrator | 2026-01-01 01:15:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:40.205217 | orchestrator | 2026-01-01 01:15:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:40.205250 | orchestrator | 2026-01-01 01:15:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:43.248354 | orchestrator | 2026-01-01 01:15:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:15:43.253216 | orchestrator | 2026-01-01 01:15:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:43.253335 | orchestrator | 2026-01-01 01:15:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:46.308385 | orchestrator | 2026-01-01 01:15:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:46.310325 | orchestrator | 2026-01-01 01:15:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:46.311041 | orchestrator | 2026-01-01 01:15:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:49.363666 | orchestrator | 2026-01-01 01:15:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:49.368354 | orchestrator | 2026-01-01 01:15:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:49.368438 | orchestrator | 2026-01-01 01:15:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:52.427342 | orchestrator | 2026-01-01 01:15:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:52.430628 | orchestrator | 2026-01-01 01:15:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:52.430725 | orchestrator | 2026-01-01 01:15:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:55.481129 | orchestrator | 2026-01-01 01:15:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:55.482745 | orchestrator | 2026-01-01 01:15:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:55.483898 | orchestrator | 2026-01-01 01:15:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:15:58.531732 | orchestrator | 2026-01-01 01:15:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:15:58.533519 | orchestrator | 2026-01-01 01:15:58 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:15:58.533561 | orchestrator | 2026-01-01 01:15:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:16:01.587509 | orchestrator | 2026-01-01 01:16:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:16:01.589423 | orchestrator | 2026-01-01 01:16:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:16:01.589606 | orchestrator | 2026-01-01 01:16:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:16:04.642987 | orchestrator | 2026-01-01 01:16:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:16:04.644640 | orchestrator | 2026-01-01 01:16:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:16:04.644912 | orchestrator | 2026-01-01 01:16:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:16:07.689746 | orchestrator | 2026-01-01 01:16:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:16:07.692773 | orchestrator | 2026-01-01 01:16:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:16:07.692851 | orchestrator | 2026-01-01 01:16:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:16:10.733140 | orchestrator | 2026-01-01 01:16:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:16:10.735287 | orchestrator | 2026-01-01 01:16:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:16:10.735329 | orchestrator | 2026-01-01 01:16:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:16:13.783123 | orchestrator | 2026-01-01 01:16:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:16:13.787122 | orchestrator | 2026-01-01 01:16:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:16:13.787179 | orchestrator | 2026-01-01 01:16:13 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:16:16.840668 | orchestrator | 2026-01-01 01:16:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:16:16.842380 | orchestrator | 2026-01-01 01:16:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:16:16.842439 | orchestrator | 2026-01-01 01:16:16 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:21:46.391597 | orchestrator | 2026-01-01 01:21:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:21:46.392092 | orchestrator | 2026-01-01 01:21:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:21:46.392125 | orchestrator | 2026-01-01 01:21:46 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 01:21:49.441583 | orchestrator | 2026-01-01 01:21:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:21:49.444788 | orchestrator | 2026-01-01 01:21:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:21:49.444863 | orchestrator | 2026-01-01 01:21:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:52.496411 | orchestrator | 2026-01-01 01:21:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:21:52.498837 | orchestrator | 2026-01-01 01:21:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:21:52.498876 | orchestrator | 2026-01-01 01:21:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:55.547825 | orchestrator | 2026-01-01 01:21:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:21:55.549258 | orchestrator | 2026-01-01 01:21:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:21:55.549282 | orchestrator | 2026-01-01 01:21:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:58.600289 | orchestrator | 2026-01-01 01:21:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:21:58.603225 | orchestrator | 2026-01-01 01:21:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:21:58.603366 | orchestrator | 2026-01-01 01:21:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:01.655436 | orchestrator | 2026-01-01 01:22:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:01.657111 | orchestrator | 2026-01-01 01:22:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:01.657166 | orchestrator | 2026-01-01 01:22:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:04.709839 | orchestrator | 2026-01-01 
01:22:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:04.711358 | orchestrator | 2026-01-01 01:22:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:04.711964 | orchestrator | 2026-01-01 01:22:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:07.760055 | orchestrator | 2026-01-01 01:22:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:07.763803 | orchestrator | 2026-01-01 01:22:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:07.763952 | orchestrator | 2026-01-01 01:22:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:10.815109 | orchestrator | 2026-01-01 01:22:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:10.816782 | orchestrator | 2026-01-01 01:22:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:10.816858 | orchestrator | 2026-01-01 01:22:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:13.868940 | orchestrator | 2026-01-01 01:22:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:13.870361 | orchestrator | 2026-01-01 01:22:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:13.870408 | orchestrator | 2026-01-01 01:22:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:16.924434 | orchestrator | 2026-01-01 01:22:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:16.926981 | orchestrator | 2026-01-01 01:22:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:16.927051 | orchestrator | 2026-01-01 01:22:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:19.977337 | orchestrator | 2026-01-01 01:22:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:22:19.980461 | orchestrator | 2026-01-01 01:22:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:19.980522 | orchestrator | 2026-01-01 01:22:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:23.027014 | orchestrator | 2026-01-01 01:22:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:23.028641 | orchestrator | 2026-01-01 01:22:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:23.028730 | orchestrator | 2026-01-01 01:22:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:26.082247 | orchestrator | 2026-01-01 01:22:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:26.084158 | orchestrator | 2026-01-01 01:22:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:26.084631 | orchestrator | 2026-01-01 01:22:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:29.137242 | orchestrator | 2026-01-01 01:22:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:29.138265 | orchestrator | 2026-01-01 01:22:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:29.138413 | orchestrator | 2026-01-01 01:22:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:32.185819 | orchestrator | 2026-01-01 01:22:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:32.188125 | orchestrator | 2026-01-01 01:22:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:32.188182 | orchestrator | 2026-01-01 01:22:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:35.235828 | orchestrator | 2026-01-01 01:22:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:35.238912 | orchestrator | 2026-01-01 01:22:35 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:35.239072 | orchestrator | 2026-01-01 01:22:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:38.288608 | orchestrator | 2026-01-01 01:22:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:38.290277 | orchestrator | 2026-01-01 01:22:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:38.290538 | orchestrator | 2026-01-01 01:22:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:41.332819 | orchestrator | 2026-01-01 01:22:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:41.334873 | orchestrator | 2026-01-01 01:22:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:41.334913 | orchestrator | 2026-01-01 01:22:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:44.390357 | orchestrator | 2026-01-01 01:22:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:44.392124 | orchestrator | 2026-01-01 01:22:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:44.392159 | orchestrator | 2026-01-01 01:22:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:47.444929 | orchestrator | 2026-01-01 01:22:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:47.446125 | orchestrator | 2026-01-01 01:22:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:47.446167 | orchestrator | 2026-01-01 01:22:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:50.490432 | orchestrator | 2026-01-01 01:22:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:50.492105 | orchestrator | 2026-01-01 01:22:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:22:50.492216 | orchestrator | 2026-01-01 01:22:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:53.534478 | orchestrator | 2026-01-01 01:22:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:53.536347 | orchestrator | 2026-01-01 01:22:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:53.536391 | orchestrator | 2026-01-01 01:22:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:56.585449 | orchestrator | 2026-01-01 01:22:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:56.588147 | orchestrator | 2026-01-01 01:22:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:56.588337 | orchestrator | 2026-01-01 01:22:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:59.634084 | orchestrator | 2026-01-01 01:22:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:22:59.636464 | orchestrator | 2026-01-01 01:22:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:22:59.636806 | orchestrator | 2026-01-01 01:22:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:02.679940 | orchestrator | 2026-01-01 01:23:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:02.682495 | orchestrator | 2026-01-01 01:23:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:02.682852 | orchestrator | 2026-01-01 01:23:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:05.733913 | orchestrator | 2026-01-01 01:23:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:05.736128 | orchestrator | 2026-01-01 01:23:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:05.736160 | orchestrator | 2026-01-01 01:23:05 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:23:08.787711 | orchestrator | 2026-01-01 01:23:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:08.788880 | orchestrator | 2026-01-01 01:23:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:08.789075 | orchestrator | 2026-01-01 01:23:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:11.843250 | orchestrator | 2026-01-01 01:23:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:11.844054 | orchestrator | 2026-01-01 01:23:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:11.844132 | orchestrator | 2026-01-01 01:23:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:14.898811 | orchestrator | 2026-01-01 01:23:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:14.902378 | orchestrator | 2026-01-01 01:23:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:14.902470 | orchestrator | 2026-01-01 01:23:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:17.953220 | orchestrator | 2026-01-01 01:23:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:17.955361 | orchestrator | 2026-01-01 01:23:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:17.955402 | orchestrator | 2026-01-01 01:23:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:21.020221 | orchestrator | 2026-01-01 01:23:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:21.021107 | orchestrator | 2026-01-01 01:23:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:21.021186 | orchestrator | 2026-01-01 01:23:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:24.074534 | orchestrator | 2026-01-01 
01:23:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:24.077193 | orchestrator | 2026-01-01 01:23:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:24.077650 | orchestrator | 2026-01-01 01:23:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:27.120313 | orchestrator | 2026-01-01 01:23:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:27.122328 | orchestrator | 2026-01-01 01:23:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:27.122365 | orchestrator | 2026-01-01 01:23:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:30.168389 | orchestrator | 2026-01-01 01:23:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:30.169360 | orchestrator | 2026-01-01 01:23:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:30.169418 | orchestrator | 2026-01-01 01:23:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:33.222196 | orchestrator | 2026-01-01 01:23:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:33.224501 | orchestrator | 2026-01-01 01:23:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:33.224576 | orchestrator | 2026-01-01 01:23:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:36.272630 | orchestrator | 2026-01-01 01:23:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:36.274402 | orchestrator | 2026-01-01 01:23:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:36.274425 | orchestrator | 2026-01-01 01:23:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:39.322752 | orchestrator | 2026-01-01 01:23:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:23:39.323619 | orchestrator | 2026-01-01 01:23:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:39.323652 | orchestrator | 2026-01-01 01:23:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:42.374345 | orchestrator | 2026-01-01 01:23:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:42.375570 | orchestrator | 2026-01-01 01:23:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:42.375615 | orchestrator | 2026-01-01 01:23:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:45.422764 | orchestrator | 2026-01-01 01:23:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:45.423738 | orchestrator | 2026-01-01 01:23:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:45.423882 | orchestrator | 2026-01-01 01:23:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:48.473428 | orchestrator | 2026-01-01 01:23:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:48.476970 | orchestrator | 2026-01-01 01:23:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:48.477040 | orchestrator | 2026-01-01 01:23:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:51.530308 | orchestrator | 2026-01-01 01:23:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:51.532185 | orchestrator | 2026-01-01 01:23:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:51.532275 | orchestrator | 2026-01-01 01:23:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:54.581948 | orchestrator | 2026-01-01 01:23:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:54.585330 | orchestrator | 2026-01-01 01:23:54 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:54.585384 | orchestrator | 2026-01-01 01:23:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:23:57.639964 | orchestrator | 2026-01-01 01:23:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:23:57.642803 | orchestrator | 2026-01-01 01:23:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:23:57.642860 | orchestrator | 2026-01-01 01:23:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:00.691428 | orchestrator | 2026-01-01 01:24:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:00.693561 | orchestrator | 2026-01-01 01:24:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:00.693843 | orchestrator | 2026-01-01 01:24:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:03.742460 | orchestrator | 2026-01-01 01:24:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:03.744134 | orchestrator | 2026-01-01 01:24:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:03.744202 | orchestrator | 2026-01-01 01:24:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:06.790844 | orchestrator | 2026-01-01 01:24:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:06.793263 | orchestrator | 2026-01-01 01:24:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:06.793312 | orchestrator | 2026-01-01 01:24:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:09.837061 | orchestrator | 2026-01-01 01:24:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:09.839170 | orchestrator | 2026-01-01 01:24:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:24:09.839278 | orchestrator | 2026-01-01 01:24:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:12.887952 | orchestrator | 2026-01-01 01:24:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:12.890972 | orchestrator | 2026-01-01 01:24:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:12.891022 | orchestrator | 2026-01-01 01:24:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:15.947576 | orchestrator | 2026-01-01 01:24:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:15.950009 | orchestrator | 2026-01-01 01:24:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:15.950134 | orchestrator | 2026-01-01 01:24:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:19.006961 | orchestrator | 2026-01-01 01:24:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:19.008768 | orchestrator | 2026-01-01 01:24:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:19.008881 | orchestrator | 2026-01-01 01:24:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:22.075696 | orchestrator | 2026-01-01 01:24:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:22.076774 | orchestrator | 2026-01-01 01:24:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:22.076852 | orchestrator | 2026-01-01 01:24:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:25.128714 | orchestrator | 2026-01-01 01:24:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:25.131588 | orchestrator | 2026-01-01 01:24:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:25.131612 | orchestrator | 2026-01-01 01:24:25 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:24:28.183035 | orchestrator | 2026-01-01 01:24:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:28.184216 | orchestrator | 2026-01-01 01:24:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:28.184248 | orchestrator | 2026-01-01 01:24:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:31.236510 | orchestrator | 2026-01-01 01:24:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:31.238936 | orchestrator | 2026-01-01 01:24:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:31.238976 | orchestrator | 2026-01-01 01:24:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:34.285875 | orchestrator | 2026-01-01 01:24:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:34.289638 | orchestrator | 2026-01-01 01:24:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:34.289703 | orchestrator | 2026-01-01 01:24:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:37.336734 | orchestrator | 2026-01-01 01:24:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:37.338221 | orchestrator | 2026-01-01 01:24:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:37.338298 | orchestrator | 2026-01-01 01:24:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:40.392262 | orchestrator | 2026-01-01 01:24:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:40.396154 | orchestrator | 2026-01-01 01:24:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:40.396235 | orchestrator | 2026-01-01 01:24:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:43.448452 | orchestrator | 2026-01-01 
01:24:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:43.450998 | orchestrator | 2026-01-01 01:24:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:43.451102 | orchestrator | 2026-01-01 01:24:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:46.499097 | orchestrator | 2026-01-01 01:24:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:46.500280 | orchestrator | 2026-01-01 01:24:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:46.500309 | orchestrator | 2026-01-01 01:24:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:49.547851 | orchestrator | 2026-01-01 01:24:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:49.549223 | orchestrator | 2026-01-01 01:24:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:49.549252 | orchestrator | 2026-01-01 01:24:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:52.597702 | orchestrator | 2026-01-01 01:24:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:52.599785 | orchestrator | 2026-01-01 01:24:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:52.599898 | orchestrator | 2026-01-01 01:24:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:55.650403 | orchestrator | 2026-01-01 01:24:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:24:55.651925 | orchestrator | 2026-01-01 01:24:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:55.651987 | orchestrator | 2026-01-01 01:24:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:24:58.702144 | orchestrator | 2026-01-01 01:24:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:24:58.704589 | orchestrator | 2026-01-01 01:24:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:24:58.704670 | orchestrator | 2026-01-01 01:24:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:25:01.756356 | orchestrator | 2026-01-01 01:25:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:25:01.758372 | orchestrator | 2026-01-01 01:25:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:25:01.758694 | orchestrator | 2026-01-01 01:25:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:25:04.808597 | orchestrator | 2026-01-01 01:25:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:25:04.811496 | orchestrator | 2026-01-01 01:25:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:25:04.811620 | orchestrator | 2026-01-01 01:25:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:25:07.863220 | orchestrator | 2026-01-01 01:25:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:25:07.865435 | orchestrator | 2026-01-01 01:25:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:25:07.865508 | orchestrator | 2026-01-01 01:25:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:25:10.916760 | orchestrator | 2026-01-01 01:25:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:25:10.919253 | orchestrator | 2026-01-01 01:25:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:25:10.919300 | orchestrator | 2026-01-01 01:25:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:25:13.965559 | orchestrator | 2026-01-01 01:25:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:25:13.966266 | orchestrator | 2026-01-01 01:25:13 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:25:13.966308 | orchestrator | 2026-01-01 01:25:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:25:17.018598 | orchestrator | 2026-01-01 01:25:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:25:17.020457 | orchestrator | 2026-01-01 01:25:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:25:17.020635 | orchestrator | 2026-01-01 01:25:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:25:20.069870 | orchestrator | 2026-01-01 01:25:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:25:20.070137 | orchestrator | 2026-01-01 01:25:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:25:20.070161 | orchestrator | 2026-01-01 01:25:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:25:23.113701 | orchestrator | 2026-01-01 01:25:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:25:23.115281 | orchestrator | 2026-01-01 01:25:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:25:23.115316 | orchestrator | 2026-01-01 01:25:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:25:26.169955 | orchestrator | 2026-01-01 01:25:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:25:26.171126 | orchestrator | 2026-01-01 01:25:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:25:26.171214 | orchestrator | 2026-01-01 01:25:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:25:29.221568 | orchestrator | 2026-01-01 01:25:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:25:29.223691 | orchestrator | 2026-01-01 01:25:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:25:29.223737 | orchestrator | 2026-01-01 01:25:29 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:25:32.271764 | orchestrator | 2026-01-01 01:25:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:25:32.274723 | orchestrator | 2026-01-01 01:25:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:25:32.274804 | orchestrator | 2026-01-01 01:25:32 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:30:31.326823 | orchestrator | 2026-01-01 01:30:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:30:31.327619 | orchestrator | 2026-01-01 01:30:31 | INFO 
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:30:31.327656 | orchestrator | 2026-01-01 01:30:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:34.379177 | orchestrator | 2026-01-01 01:30:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:30:34.381345 | orchestrator | 2026-01-01 01:30:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:30:34.381393 | orchestrator | 2026-01-01 01:30:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:37.434740 | orchestrator | 2026-01-01 01:30:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:30:37.437299 | orchestrator | 2026-01-01 01:30:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:30:37.437351 | orchestrator | 2026-01-01 01:30:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:40.489594 | orchestrator | 2026-01-01 01:30:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:30:40.490527 | orchestrator | 2026-01-01 01:30:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:30:40.490568 | orchestrator | 2026-01-01 01:30:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:43.540758 | orchestrator | 2026-01-01 01:30:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:30:43.542865 | orchestrator | 2026-01-01 01:30:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:30:43.542924 | orchestrator | 2026-01-01 01:30:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:46.592184 | orchestrator | 2026-01-01 01:30:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:30:46.593320 | orchestrator | 2026-01-01 01:30:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:30:46.593377 | orchestrator | 2026-01-01 01:30:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:49.642206 | orchestrator | 2026-01-01 01:30:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:30:49.643359 | orchestrator | 2026-01-01 01:30:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:30:49.643461 | orchestrator | 2026-01-01 01:30:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:52.695079 | orchestrator | 2026-01-01 01:30:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:30:52.695565 | orchestrator | 2026-01-01 01:30:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:30:52.695612 | orchestrator | 2026-01-01 01:30:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:55.747445 | orchestrator | 2026-01-01 01:30:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:30:55.749452 | orchestrator | 2026-01-01 01:30:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:30:55.749503 | orchestrator | 2026-01-01 01:30:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:58.801373 | orchestrator | 2026-01-01 01:30:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:30:58.802260 | orchestrator | 2026-01-01 01:30:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:30:58.802311 | orchestrator | 2026-01-01 01:30:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:01.854758 | orchestrator | 2026-01-01 01:31:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:01.856854 | orchestrator | 2026-01-01 01:31:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:01.857114 | orchestrator | 2026-01-01 01:31:01 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:31:04.909610 | orchestrator | 2026-01-01 01:31:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:04.910399 | orchestrator | 2026-01-01 01:31:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:04.910789 | orchestrator | 2026-01-01 01:31:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:07.963931 | orchestrator | 2026-01-01 01:31:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:07.966352 | orchestrator | 2026-01-01 01:31:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:07.966434 | orchestrator | 2026-01-01 01:31:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:11.024386 | orchestrator | 2026-01-01 01:31:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:11.025491 | orchestrator | 2026-01-01 01:31:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:11.025573 | orchestrator | 2026-01-01 01:31:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:14.067147 | orchestrator | 2026-01-01 01:31:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:14.069511 | orchestrator | 2026-01-01 01:31:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:14.069662 | orchestrator | 2026-01-01 01:31:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:17.112127 | orchestrator | 2026-01-01 01:31:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:17.113555 | orchestrator | 2026-01-01 01:31:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:17.113606 | orchestrator | 2026-01-01 01:31:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:20.160461 | orchestrator | 2026-01-01 
01:31:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:20.162734 | orchestrator | 2026-01-01 01:31:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:20.162843 | orchestrator | 2026-01-01 01:31:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:23.215509 | orchestrator | 2026-01-01 01:31:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:23.216626 | orchestrator | 2026-01-01 01:31:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:23.216750 | orchestrator | 2026-01-01 01:31:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:26.271212 | orchestrator | 2026-01-01 01:31:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:26.273129 | orchestrator | 2026-01-01 01:31:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:26.273182 | orchestrator | 2026-01-01 01:31:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:29.325274 | orchestrator | 2026-01-01 01:31:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:29.327265 | orchestrator | 2026-01-01 01:31:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:29.327306 | orchestrator | 2026-01-01 01:31:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:32.377301 | orchestrator | 2026-01-01 01:31:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:32.377611 | orchestrator | 2026-01-01 01:31:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:32.377641 | orchestrator | 2026-01-01 01:31:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:35.430367 | orchestrator | 2026-01-01 01:31:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:31:35.432049 | orchestrator | 2026-01-01 01:31:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:35.432091 | orchestrator | 2026-01-01 01:31:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:38.487367 | orchestrator | 2026-01-01 01:31:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:38.490207 | orchestrator | 2026-01-01 01:31:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:38.490286 | orchestrator | 2026-01-01 01:31:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:41.531786 | orchestrator | 2026-01-01 01:31:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:41.533496 | orchestrator | 2026-01-01 01:31:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:41.533586 | orchestrator | 2026-01-01 01:31:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:44.583519 | orchestrator | 2026-01-01 01:31:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:44.586557 | orchestrator | 2026-01-01 01:31:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:44.586612 | orchestrator | 2026-01-01 01:31:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:47.629404 | orchestrator | 2026-01-01 01:31:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:47.631304 | orchestrator | 2026-01-01 01:31:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:47.631344 | orchestrator | 2026-01-01 01:31:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:50.682349 | orchestrator | 2026-01-01 01:31:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:50.683850 | orchestrator | 2026-01-01 01:31:50 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:50.684034 | orchestrator | 2026-01-01 01:31:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:53.734175 | orchestrator | 2026-01-01 01:31:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:53.737440 | orchestrator | 2026-01-01 01:31:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:53.737479 | orchestrator | 2026-01-01 01:31:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:56.789094 | orchestrator | 2026-01-01 01:31:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:56.791660 | orchestrator | 2026-01-01 01:31:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:56.791717 | orchestrator | 2026-01-01 01:31:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:59.851134 | orchestrator | 2026-01-01 01:31:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:31:59.853203 | orchestrator | 2026-01-01 01:31:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:31:59.853264 | orchestrator | 2026-01-01 01:31:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:02.901656 | orchestrator | 2026-01-01 01:32:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:02.906160 | orchestrator | 2026-01-01 01:32:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:02.906223 | orchestrator | 2026-01-01 01:32:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:05.960860 | orchestrator | 2026-01-01 01:32:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:05.963541 | orchestrator | 2026-01-01 01:32:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:32:05.963750 | orchestrator | 2026-01-01 01:32:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:09.011189 | orchestrator | 2026-01-01 01:32:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:09.011447 | orchestrator | 2026-01-01 01:32:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:09.011683 | orchestrator | 2026-01-01 01:32:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:12.061881 | orchestrator | 2026-01-01 01:32:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:12.063248 | orchestrator | 2026-01-01 01:32:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:12.063290 | orchestrator | 2026-01-01 01:32:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:15.111949 | orchestrator | 2026-01-01 01:32:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:15.113566 | orchestrator | 2026-01-01 01:32:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:15.113625 | orchestrator | 2026-01-01 01:32:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:18.167185 | orchestrator | 2026-01-01 01:32:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:18.168914 | orchestrator | 2026-01-01 01:32:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:18.168952 | orchestrator | 2026-01-01 01:32:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:21.218340 | orchestrator | 2026-01-01 01:32:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:21.219094 | orchestrator | 2026-01-01 01:32:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:21.219438 | orchestrator | 2026-01-01 01:32:21 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:32:24.271457 | orchestrator | 2026-01-01 01:32:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:24.272822 | orchestrator | 2026-01-01 01:32:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:24.272859 | orchestrator | 2026-01-01 01:32:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:27.324339 | orchestrator | 2026-01-01 01:32:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:27.326810 | orchestrator | 2026-01-01 01:32:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:27.326863 | orchestrator | 2026-01-01 01:32:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:30.378247 | orchestrator | 2026-01-01 01:32:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:30.379631 | orchestrator | 2026-01-01 01:32:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:30.379665 | orchestrator | 2026-01-01 01:32:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:33.427224 | orchestrator | 2026-01-01 01:32:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:33.429584 | orchestrator | 2026-01-01 01:32:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:33.429619 | orchestrator | 2026-01-01 01:32:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:36.487208 | orchestrator | 2026-01-01 01:32:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:36.489827 | orchestrator | 2026-01-01 01:32:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:36.490292 | orchestrator | 2026-01-01 01:32:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:39.543608 | orchestrator | 2026-01-01 
01:32:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:39.546252 | orchestrator | 2026-01-01 01:32:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:39.546316 | orchestrator | 2026-01-01 01:32:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:42.589929 | orchestrator | 2026-01-01 01:32:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:42.592170 | orchestrator | 2026-01-01 01:32:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:42.592208 | orchestrator | 2026-01-01 01:32:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:45.643942 | orchestrator | 2026-01-01 01:32:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:45.645858 | orchestrator | 2026-01-01 01:32:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:45.645923 | orchestrator | 2026-01-01 01:32:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:48.698894 | orchestrator | 2026-01-01 01:32:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:48.700985 | orchestrator | 2026-01-01 01:32:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:48.701091 | orchestrator | 2026-01-01 01:32:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:51.746707 | orchestrator | 2026-01-01 01:32:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:51.748176 | orchestrator | 2026-01-01 01:32:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:51.748270 | orchestrator | 2026-01-01 01:32:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:54.798885 | orchestrator | 2026-01-01 01:32:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:32:54.800555 | orchestrator | 2026-01-01 01:32:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:54.800590 | orchestrator | 2026-01-01 01:32:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:32:57.843852 | orchestrator | 2026-01-01 01:32:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:32:57.845916 | orchestrator | 2026-01-01 01:32:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:32:57.846211 | orchestrator | 2026-01-01 01:32:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:00.899307 | orchestrator | 2026-01-01 01:33:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:00.901543 | orchestrator | 2026-01-01 01:33:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:00.901624 | orchestrator | 2026-01-01 01:33:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:03.954717 | orchestrator | 2026-01-01 01:33:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:03.955974 | orchestrator | 2026-01-01 01:33:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:03.956109 | orchestrator | 2026-01-01 01:33:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:07.011821 | orchestrator | 2026-01-01 01:33:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:07.014449 | orchestrator | 2026-01-01 01:33:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:07.014537 | orchestrator | 2026-01-01 01:33:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:10.061943 | orchestrator | 2026-01-01 01:33:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:10.062582 | orchestrator | 2026-01-01 01:33:10 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:10.062619 | orchestrator | 2026-01-01 01:33:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:13.106770 | orchestrator | 2026-01-01 01:33:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:13.108335 | orchestrator | 2026-01-01 01:33:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:13.108880 | orchestrator | 2026-01-01 01:33:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:16.158459 | orchestrator | 2026-01-01 01:33:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:16.159691 | orchestrator | 2026-01-01 01:33:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:16.159715 | orchestrator | 2026-01-01 01:33:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:19.211483 | orchestrator | 2026-01-01 01:33:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:19.213712 | orchestrator | 2026-01-01 01:33:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:19.213779 | orchestrator | 2026-01-01 01:33:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:22.262722 | orchestrator | 2026-01-01 01:33:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:22.265371 | orchestrator | 2026-01-01 01:33:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:22.265393 | orchestrator | 2026-01-01 01:33:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:25.310648 | orchestrator | 2026-01-01 01:33:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:25.314241 | orchestrator | 2026-01-01 01:33:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:33:25.314310 | orchestrator | 2026-01-01 01:33:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:28.360914 | orchestrator | 2026-01-01 01:33:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:28.362538 | orchestrator | 2026-01-01 01:33:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:28.362662 | orchestrator | 2026-01-01 01:33:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:31.411613 | orchestrator | 2026-01-01 01:33:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:31.412774 | orchestrator | 2026-01-01 01:33:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:31.412838 | orchestrator | 2026-01-01 01:33:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:34.465211 | orchestrator | 2026-01-01 01:33:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:34.466840 | orchestrator | 2026-01-01 01:33:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:34.466918 | orchestrator | 2026-01-01 01:33:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:37.523612 | orchestrator | 2026-01-01 01:33:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:37.525312 | orchestrator | 2026-01-01 01:33:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:37.525413 | orchestrator | 2026-01-01 01:33:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:40.567960 | orchestrator | 2026-01-01 01:33:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:40.569730 | orchestrator | 2026-01-01 01:33:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:40.569772 | orchestrator | 2026-01-01 01:33:40 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:33:43.615750 | orchestrator | 2026-01-01 01:33:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:43.618231 | orchestrator | 2026-01-01 01:33:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:43.618349 | orchestrator | 2026-01-01 01:33:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:46.674642 | orchestrator | 2026-01-01 01:33:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:46.676378 | orchestrator | 2026-01-01 01:33:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:46.676612 | orchestrator | 2026-01-01 01:33:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:49.721281 | orchestrator | 2026-01-01 01:33:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:49.722966 | orchestrator | 2026-01-01 01:33:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:49.722990 | orchestrator | 2026-01-01 01:33:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:52.769416 | orchestrator | 2026-01-01 01:33:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:52.771844 | orchestrator | 2026-01-01 01:33:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:52.771935 | orchestrator | 2026-01-01 01:33:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:55.819609 | orchestrator | 2026-01-01 01:33:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:55.823287 | orchestrator | 2026-01-01 01:33:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:55.823340 | orchestrator | 2026-01-01 01:33:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:33:58.868439 | orchestrator | 2026-01-01 
01:33:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:33:58.869192 | orchestrator | 2026-01-01 01:33:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:33:58.869339 | orchestrator | 2026-01-01 01:33:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:34:01.914958 | orchestrator | 2026-01-01 01:34:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:34:01.917206 | orchestrator | 2026-01-01 01:34:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:34:01.917263 | orchestrator | 2026-01-01 01:34:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:34:04.970271 | orchestrator | 2026-01-01 01:34:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:34:04.973495 | orchestrator | 2026-01-01 01:34:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:34:04.973633 | orchestrator | 2026-01-01 01:34:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:34:08.021838 | orchestrator | 2026-01-01 01:34:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:34:08.024183 | orchestrator | 2026-01-01 01:34:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:34:08.024260 | orchestrator | 2026-01-01 01:34:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:34:11.079494 | orchestrator | 2026-01-01 01:34:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:34:11.081999 | orchestrator | 2026-01-01 01:34:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:34:11.082132 | orchestrator | 2026-01-01 01:34:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:34:14.133813 | orchestrator | 2026-01-01 01:34:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:34:14.135619 | orchestrator | 2026-01-01 01:34:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:34:14.135658 | orchestrator | 2026-01-01 01:34:14 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:34:17.188877 | orchestrator | 2026-01-01 01:34:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:34:17.190552 | orchestrator | 2026-01-01 01:34:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:34:17.190610 | orchestrator | 2026-01-01 01:34:17 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:34:20 through 01:39:40; tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858 remained in state STARTED throughout ...]
2026-01-01 01:39:43.667868 | orchestrator | 2026-01-01 01:39:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:39:43.670496 | orchestrator | 2026-01-01 01:39:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:39:43.670579 | orchestrator | 2026-01-01 01:39:43 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:39:46.726098 | orchestrator | 2026-01-01 01:39:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:39:46.726617 | orchestrator | 2026-01-01 01:39:46 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:39:46.726652 | orchestrator | 2026-01-01 01:39:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:49.777799 | orchestrator | 2026-01-01 01:39:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:39:49.778979 | orchestrator | 2026-01-01 01:39:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:39:49.779030 | orchestrator | 2026-01-01 01:39:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:52.821637 | orchestrator | 2026-01-01 01:39:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:39:52.823443 | orchestrator | 2026-01-01 01:39:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:39:52.823499 | orchestrator | 2026-01-01 01:39:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:55.874540 | orchestrator | 2026-01-01 01:39:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:39:55.876681 | orchestrator | 2026-01-01 01:39:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:39:55.876720 | orchestrator | 2026-01-01 01:39:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:58.931035 | orchestrator | 2026-01-01 01:39:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:39:58.932696 | orchestrator | 2026-01-01 01:39:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:39:58.932834 | orchestrator | 2026-01-01 01:39:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:01.993303 | orchestrator | 2026-01-01 01:40:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:01.997155 | orchestrator | 2026-01-01 01:40:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:40:01.997195 | orchestrator | 2026-01-01 01:40:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:05.052320 | orchestrator | 2026-01-01 01:40:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:05.053523 | orchestrator | 2026-01-01 01:40:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:05.053557 | orchestrator | 2026-01-01 01:40:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:08.096581 | orchestrator | 2026-01-01 01:40:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:08.096719 | orchestrator | 2026-01-01 01:40:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:08.096736 | orchestrator | 2026-01-01 01:40:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:11.141242 | orchestrator | 2026-01-01 01:40:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:11.141343 | orchestrator | 2026-01-01 01:40:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:11.141358 | orchestrator | 2026-01-01 01:40:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:14.197483 | orchestrator | 2026-01-01 01:40:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:14.199196 | orchestrator | 2026-01-01 01:40:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:14.199245 | orchestrator | 2026-01-01 01:40:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:17.252644 | orchestrator | 2026-01-01 01:40:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:17.253795 | orchestrator | 2026-01-01 01:40:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:17.253857 | orchestrator | 2026-01-01 01:40:17 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:40:20.304247 | orchestrator | 2026-01-01 01:40:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:20.306573 | orchestrator | 2026-01-01 01:40:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:20.306617 | orchestrator | 2026-01-01 01:40:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:23.354991 | orchestrator | 2026-01-01 01:40:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:23.356824 | orchestrator | 2026-01-01 01:40:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:23.356854 | orchestrator | 2026-01-01 01:40:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:26.403056 | orchestrator | 2026-01-01 01:40:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:26.404422 | orchestrator | 2026-01-01 01:40:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:26.404464 | orchestrator | 2026-01-01 01:40:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:29.453467 | orchestrator | 2026-01-01 01:40:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:29.454683 | orchestrator | 2026-01-01 01:40:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:29.454730 | orchestrator | 2026-01-01 01:40:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:32.513426 | orchestrator | 2026-01-01 01:40:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:32.515208 | orchestrator | 2026-01-01 01:40:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:32.515283 | orchestrator | 2026-01-01 01:40:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:35.560169 | orchestrator | 2026-01-01 
01:40:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:35.560796 | orchestrator | 2026-01-01 01:40:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:35.560898 | orchestrator | 2026-01-01 01:40:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:38.606287 | orchestrator | 2026-01-01 01:40:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:38.607582 | orchestrator | 2026-01-01 01:40:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:38.607643 | orchestrator | 2026-01-01 01:40:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:41.648041 | orchestrator | 2026-01-01 01:40:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:41.649995 | orchestrator | 2026-01-01 01:40:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:41.650160 | orchestrator | 2026-01-01 01:40:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:44.695154 | orchestrator | 2026-01-01 01:40:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:44.696257 | orchestrator | 2026-01-01 01:40:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:44.696305 | orchestrator | 2026-01-01 01:40:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:47.750441 | orchestrator | 2026-01-01 01:40:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:47.751491 | orchestrator | 2026-01-01 01:40:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:47.751512 | orchestrator | 2026-01-01 01:40:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:50.797992 | orchestrator | 2026-01-01 01:40:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:40:50.800007 | orchestrator | 2026-01-01 01:40:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:50.800052 | orchestrator | 2026-01-01 01:40:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:53.848500 | orchestrator | 2026-01-01 01:40:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:53.850776 | orchestrator | 2026-01-01 01:40:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:53.850834 | orchestrator | 2026-01-01 01:40:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:56.906515 | orchestrator | 2026-01-01 01:40:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:56.907985 | orchestrator | 2026-01-01 01:40:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:56.908279 | orchestrator | 2026-01-01 01:40:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:59.957889 | orchestrator | 2026-01-01 01:40:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:40:59.959953 | orchestrator | 2026-01-01 01:40:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:40:59.960033 | orchestrator | 2026-01-01 01:40:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:03.014489 | orchestrator | 2026-01-01 01:41:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:03.016565 | orchestrator | 2026-01-01 01:41:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:03.016632 | orchestrator | 2026-01-01 01:41:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:06.068565 | orchestrator | 2026-01-01 01:41:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:06.070906 | orchestrator | 2026-01-01 01:41:06 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:06.070945 | orchestrator | 2026-01-01 01:41:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:09.115931 | orchestrator | 2026-01-01 01:41:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:09.117419 | orchestrator | 2026-01-01 01:41:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:09.117461 | orchestrator | 2026-01-01 01:41:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:12.167008 | orchestrator | 2026-01-01 01:41:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:12.167849 | orchestrator | 2026-01-01 01:41:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:12.167923 | orchestrator | 2026-01-01 01:41:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:15.214646 | orchestrator | 2026-01-01 01:41:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:15.215741 | orchestrator | 2026-01-01 01:41:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:15.215764 | orchestrator | 2026-01-01 01:41:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:18.267012 | orchestrator | 2026-01-01 01:41:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:18.268092 | orchestrator | 2026-01-01 01:41:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:18.268120 | orchestrator | 2026-01-01 01:41:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:21.314915 | orchestrator | 2026-01-01 01:41:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:21.317088 | orchestrator | 2026-01-01 01:41:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:41:21.317130 | orchestrator | 2026-01-01 01:41:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:24.368675 | orchestrator | 2026-01-01 01:41:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:24.369297 | orchestrator | 2026-01-01 01:41:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:24.369357 | orchestrator | 2026-01-01 01:41:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:27.410421 | orchestrator | 2026-01-01 01:41:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:27.411894 | orchestrator | 2026-01-01 01:41:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:27.411941 | orchestrator | 2026-01-01 01:41:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:30.455359 | orchestrator | 2026-01-01 01:41:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:30.458062 | orchestrator | 2026-01-01 01:41:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:30.458094 | orchestrator | 2026-01-01 01:41:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:33.501429 | orchestrator | 2026-01-01 01:41:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:33.502955 | orchestrator | 2026-01-01 01:41:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:33.503018 | orchestrator | 2026-01-01 01:41:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:36.548124 | orchestrator | 2026-01-01 01:41:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:36.548472 | orchestrator | 2026-01-01 01:41:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:36.548511 | orchestrator | 2026-01-01 01:41:36 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:41:39.596848 | orchestrator | 2026-01-01 01:41:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:39.598202 | orchestrator | 2026-01-01 01:41:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:39.598298 | orchestrator | 2026-01-01 01:41:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:42.654319 | orchestrator | 2026-01-01 01:41:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:42.655872 | orchestrator | 2026-01-01 01:41:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:42.655920 | orchestrator | 2026-01-01 01:41:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:45.704707 | orchestrator | 2026-01-01 01:41:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:45.706502 | orchestrator | 2026-01-01 01:41:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:45.706559 | orchestrator | 2026-01-01 01:41:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:48.768687 | orchestrator | 2026-01-01 01:41:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:48.769471 | orchestrator | 2026-01-01 01:41:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:48.769760 | orchestrator | 2026-01-01 01:41:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:51.819790 | orchestrator | 2026-01-01 01:41:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:51.823394 | orchestrator | 2026-01-01 01:41:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:51.823584 | orchestrator | 2026-01-01 01:41:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:54.875964 | orchestrator | 2026-01-01 
01:41:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:54.878365 | orchestrator | 2026-01-01 01:41:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:54.878405 | orchestrator | 2026-01-01 01:41:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:41:57.927250 | orchestrator | 2026-01-01 01:41:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:41:57.929852 | orchestrator | 2026-01-01 01:41:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:41:57.929898 | orchestrator | 2026-01-01 01:41:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:00.978299 | orchestrator | 2026-01-01 01:42:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:00.979332 | orchestrator | 2026-01-01 01:42:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:00.979416 | orchestrator | 2026-01-01 01:42:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:04.026912 | orchestrator | 2026-01-01 01:42:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:04.029084 | orchestrator | 2026-01-01 01:42:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:04.029107 | orchestrator | 2026-01-01 01:42:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:07.078245 | orchestrator | 2026-01-01 01:42:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:07.079970 | orchestrator | 2026-01-01 01:42:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:07.080373 | orchestrator | 2026-01-01 01:42:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:10.124667 | orchestrator | 2026-01-01 01:42:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:42:10.125764 | orchestrator | 2026-01-01 01:42:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:10.125997 | orchestrator | 2026-01-01 01:42:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:13.179827 | orchestrator | 2026-01-01 01:42:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:13.181044 | orchestrator | 2026-01-01 01:42:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:13.181127 | orchestrator | 2026-01-01 01:42:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:16.231134 | orchestrator | 2026-01-01 01:42:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:16.232052 | orchestrator | 2026-01-01 01:42:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:16.232082 | orchestrator | 2026-01-01 01:42:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:19.282123 | orchestrator | 2026-01-01 01:42:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:19.283604 | orchestrator | 2026-01-01 01:42:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:19.283857 | orchestrator | 2026-01-01 01:42:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:22.325348 | orchestrator | 2026-01-01 01:42:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:22.327633 | orchestrator | 2026-01-01 01:42:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:22.327702 | orchestrator | 2026-01-01 01:42:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:25.379847 | orchestrator | 2026-01-01 01:42:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:25.381633 | orchestrator | 2026-01-01 01:42:25 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:25.382057 | orchestrator | 2026-01-01 01:42:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:28.431875 | orchestrator | 2026-01-01 01:42:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:28.433590 | orchestrator | 2026-01-01 01:42:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:28.433785 | orchestrator | 2026-01-01 01:42:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:31.478325 | orchestrator | 2026-01-01 01:42:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:31.480637 | orchestrator | 2026-01-01 01:42:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:31.480785 | orchestrator | 2026-01-01 01:42:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:34.529618 | orchestrator | 2026-01-01 01:42:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:34.531337 | orchestrator | 2026-01-01 01:42:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:34.531420 | orchestrator | 2026-01-01 01:42:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:37.566556 | orchestrator | 2026-01-01 01:42:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:37.568964 | orchestrator | 2026-01-01 01:42:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:37.569016 | orchestrator | 2026-01-01 01:42:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:40.612908 | orchestrator | 2026-01-01 01:42:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:40.614396 | orchestrator | 2026-01-01 01:42:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:42:40.614511 | orchestrator | 2026-01-01 01:42:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:43.666659 | orchestrator | 2026-01-01 01:42:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:43.668927 | orchestrator | 2026-01-01 01:42:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:43.668982 | orchestrator | 2026-01-01 01:42:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:46.715713 | orchestrator | 2026-01-01 01:42:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:46.718968 | orchestrator | 2026-01-01 01:42:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:46.719025 | orchestrator | 2026-01-01 01:42:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:49.767949 | orchestrator | 2026-01-01 01:42:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:49.770328 | orchestrator | 2026-01-01 01:42:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:49.770390 | orchestrator | 2026-01-01 01:42:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:52.820116 | orchestrator | 2026-01-01 01:42:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:52.821678 | orchestrator | 2026-01-01 01:42:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:52.821706 | orchestrator | 2026-01-01 01:42:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:42:55.868996 | orchestrator | 2026-01-01 01:42:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:55.870243 | orchestrator | 2026-01-01 01:42:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:55.870303 | orchestrator | 2026-01-01 01:42:55 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:42:58.916293 | orchestrator | 2026-01-01 01:42:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:42:58.918348 | orchestrator | 2026-01-01 01:42:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:42:58.918377 | orchestrator | 2026-01-01 01:42:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:43:01.968064 | orchestrator | 2026-01-01 01:43:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:43:01.969959 | orchestrator | 2026-01-01 01:43:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:43:01.969998 | orchestrator | 2026-01-01 01:43:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:43:05.020410 | orchestrator | 2026-01-01 01:43:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:43:05.022321 | orchestrator | 2026-01-01 01:43:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:43:05.022355 | orchestrator | 2026-01-01 01:43:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:43:08.070155 | orchestrator | 2026-01-01 01:43:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:43:08.071643 | orchestrator | 2026-01-01 01:43:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:43:08.072007 | orchestrator | 2026-01-01 01:43:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:43:11.114678 | orchestrator | 2026-01-01 01:43:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:43:11.116529 | orchestrator | 2026-01-01 01:43:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:43:11.116563 | orchestrator | 2026-01-01 01:43:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:43:14.156041 | orchestrator | 2026-01-01 
01:43:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:43:14.157687 | orchestrator | 2026-01-01 01:43:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:43:14.157726 | orchestrator | 2026-01-01 01:43:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:43:17.214239 | orchestrator | 2026-01-01 01:43:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:43:17.216469 | orchestrator | 2026-01-01 01:43:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:43:17.216537 | orchestrator | 2026-01-01 01:43:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:43:20.266551 | orchestrator | 2026-01-01 01:43:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:43:20.267990 | orchestrator | 2026-01-01 01:43:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:43:20.268020 | orchestrator | 2026-01-01 01:43:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:43:23.318362 | orchestrator | 2026-01-01 01:43:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:43:23.322108 | orchestrator | 2026-01-01 01:43:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:43:23.322213 | orchestrator | 2026-01-01 01:43:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:43:26.376492 | orchestrator | 2026-01-01 01:43:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:43:26.378407 | orchestrator | 2026-01-01 01:43:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:43:26.378489 | orchestrator | 2026-01-01 01:43:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:43:29.423102 | orchestrator | 2026-01-01 01:43:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:43:29.423883 | orchestrator | 2026-01-01 01:43:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:43:29.423924 | orchestrator | 2026-01-01 01:43:29 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:43:32 to 01:48:43: tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858 remained in state STARTED throughout ...]
2026-01-01 01:48:46.682474 | orchestrator | 2026-01-01 01:48:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state
STARTED 2026-01-01 01:48:46.685416 | orchestrator | 2026-01-01 01:48:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:48:46.685468 | orchestrator | 2026-01-01 01:48:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:49.737833 | orchestrator | 2026-01-01 01:48:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:48:49.740540 | orchestrator | 2026-01-01 01:48:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:48:49.740591 | orchestrator | 2026-01-01 01:48:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:52.787077 | orchestrator | 2026-01-01 01:48:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:48:52.788014 | orchestrator | 2026-01-01 01:48:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:48:52.788061 | orchestrator | 2026-01-01 01:48:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:55.832601 | orchestrator | 2026-01-01 01:48:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:48:55.835175 | orchestrator | 2026-01-01 01:48:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:48:55.835278 | orchestrator | 2026-01-01 01:48:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:58.885962 | orchestrator | 2026-01-01 01:48:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:48:58.889096 | orchestrator | 2026-01-01 01:48:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:48:58.889271 | orchestrator | 2026-01-01 01:48:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:01.947415 | orchestrator | 2026-01-01 01:49:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:01.950320 | orchestrator | 2026-01-01 01:49:01 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:01.950390 | orchestrator | 2026-01-01 01:49:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:05.006165 | orchestrator | 2026-01-01 01:49:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:05.008110 | orchestrator | 2026-01-01 01:49:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:05.008166 | orchestrator | 2026-01-01 01:49:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:08.062280 | orchestrator | 2026-01-01 01:49:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:08.064590 | orchestrator | 2026-01-01 01:49:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:08.064638 | orchestrator | 2026-01-01 01:49:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:11.108082 | orchestrator | 2026-01-01 01:49:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:11.110573 | orchestrator | 2026-01-01 01:49:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:11.110664 | orchestrator | 2026-01-01 01:49:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:14.157709 | orchestrator | 2026-01-01 01:49:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:14.159875 | orchestrator | 2026-01-01 01:49:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:14.159907 | orchestrator | 2026-01-01 01:49:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:17.210514 | orchestrator | 2026-01-01 01:49:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:17.212466 | orchestrator | 2026-01-01 01:49:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:49:17.212974 | orchestrator | 2026-01-01 01:49:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:20.258681 | orchestrator | 2026-01-01 01:49:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:20.261609 | orchestrator | 2026-01-01 01:49:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:20.261656 | orchestrator | 2026-01-01 01:49:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:23.306313 | orchestrator | 2026-01-01 01:49:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:23.308622 | orchestrator | 2026-01-01 01:49:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:23.308675 | orchestrator | 2026-01-01 01:49:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:26.359592 | orchestrator | 2026-01-01 01:49:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:26.363118 | orchestrator | 2026-01-01 01:49:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:26.363178 | orchestrator | 2026-01-01 01:49:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:29.418370 | orchestrator | 2026-01-01 01:49:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:29.420563 | orchestrator | 2026-01-01 01:49:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:29.420608 | orchestrator | 2026-01-01 01:49:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:32.469260 | orchestrator | 2026-01-01 01:49:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:32.472310 | orchestrator | 2026-01-01 01:49:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:32.472369 | orchestrator | 2026-01-01 01:49:32 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:49:35.518391 | orchestrator | 2026-01-01 01:49:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:35.521161 | orchestrator | 2026-01-01 01:49:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:35.521214 | orchestrator | 2026-01-01 01:49:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:38.568736 | orchestrator | 2026-01-01 01:49:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:38.570395 | orchestrator | 2026-01-01 01:49:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:38.570447 | orchestrator | 2026-01-01 01:49:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:41.618480 | orchestrator | 2026-01-01 01:49:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:41.619855 | orchestrator | 2026-01-01 01:49:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:41.619892 | orchestrator | 2026-01-01 01:49:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:44.667229 | orchestrator | 2026-01-01 01:49:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:44.668843 | orchestrator | 2026-01-01 01:49:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:44.668887 | orchestrator | 2026-01-01 01:49:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:47.718571 | orchestrator | 2026-01-01 01:49:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:47.720134 | orchestrator | 2026-01-01 01:49:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:47.720192 | orchestrator | 2026-01-01 01:49:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:50.762688 | orchestrator | 2026-01-01 
01:49:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:50.764282 | orchestrator | 2026-01-01 01:49:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:50.764327 | orchestrator | 2026-01-01 01:49:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:53.812181 | orchestrator | 2026-01-01 01:49:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:53.815289 | orchestrator | 2026-01-01 01:49:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:53.815346 | orchestrator | 2026-01-01 01:49:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:56.867049 | orchestrator | 2026-01-01 01:49:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:56.871272 | orchestrator | 2026-01-01 01:49:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:56.871319 | orchestrator | 2026-01-01 01:49:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:59.917835 | orchestrator | 2026-01-01 01:49:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:49:59.920276 | orchestrator | 2026-01-01 01:49:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:49:59.920481 | orchestrator | 2026-01-01 01:49:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:02.977559 | orchestrator | 2026-01-01 01:50:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:02.978457 | orchestrator | 2026-01-01 01:50:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:02.978547 | orchestrator | 2026-01-01 01:50:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:06.028037 | orchestrator | 2026-01-01 01:50:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:50:06.028677 | orchestrator | 2026-01-01 01:50:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:06.028793 | orchestrator | 2026-01-01 01:50:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:09.080541 | orchestrator | 2026-01-01 01:50:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:09.082446 | orchestrator | 2026-01-01 01:50:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:09.082803 | orchestrator | 2026-01-01 01:50:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:12.133125 | orchestrator | 2026-01-01 01:50:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:12.135574 | orchestrator | 2026-01-01 01:50:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:12.136275 | orchestrator | 2026-01-01 01:50:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:15.188311 | orchestrator | 2026-01-01 01:50:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:15.190674 | orchestrator | 2026-01-01 01:50:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:15.190847 | orchestrator | 2026-01-01 01:50:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:18.237446 | orchestrator | 2026-01-01 01:50:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:18.238776 | orchestrator | 2026-01-01 01:50:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:18.238805 | orchestrator | 2026-01-01 01:50:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:21.290732 | orchestrator | 2026-01-01 01:50:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:21.292438 | orchestrator | 2026-01-01 01:50:21 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:21.292485 | orchestrator | 2026-01-01 01:50:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:24.346092 | orchestrator | 2026-01-01 01:50:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:24.348241 | orchestrator | 2026-01-01 01:50:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:24.348352 | orchestrator | 2026-01-01 01:50:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:27.392964 | orchestrator | 2026-01-01 01:50:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:27.395012 | orchestrator | 2026-01-01 01:50:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:27.395046 | orchestrator | 2026-01-01 01:50:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:30.443849 | orchestrator | 2026-01-01 01:50:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:30.446273 | orchestrator | 2026-01-01 01:50:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:30.446322 | orchestrator | 2026-01-01 01:50:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:33.485469 | orchestrator | 2026-01-01 01:50:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:33.490214 | orchestrator | 2026-01-01 01:50:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:33.490271 | orchestrator | 2026-01-01 01:50:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:36.539384 | orchestrator | 2026-01-01 01:50:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:36.541002 | orchestrator | 2026-01-01 01:50:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:50:36.541070 | orchestrator | 2026-01-01 01:50:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:39.590574 | orchestrator | 2026-01-01 01:50:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:39.592985 | orchestrator | 2026-01-01 01:50:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:39.593247 | orchestrator | 2026-01-01 01:50:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:42.642270 | orchestrator | 2026-01-01 01:50:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:42.644104 | orchestrator | 2026-01-01 01:50:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:42.644255 | orchestrator | 2026-01-01 01:50:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:45.687852 | orchestrator | 2026-01-01 01:50:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:45.689748 | orchestrator | 2026-01-01 01:50:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:45.690150 | orchestrator | 2026-01-01 01:50:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:48.740556 | orchestrator | 2026-01-01 01:50:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:48.741978 | orchestrator | 2026-01-01 01:50:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:48.742093 | orchestrator | 2026-01-01 01:50:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:51.798572 | orchestrator | 2026-01-01 01:50:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:51.800578 | orchestrator | 2026-01-01 01:50:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:51.800608 | orchestrator | 2026-01-01 01:50:51 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:50:54.849968 | orchestrator | 2026-01-01 01:50:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:54.851930 | orchestrator | 2026-01-01 01:50:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:54.851965 | orchestrator | 2026-01-01 01:50:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:50:57.903792 | orchestrator | 2026-01-01 01:50:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:50:57.905869 | orchestrator | 2026-01-01 01:50:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:50:57.905983 | orchestrator | 2026-01-01 01:50:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:00.959608 | orchestrator | 2026-01-01 01:51:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:00.960653 | orchestrator | 2026-01-01 01:51:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:00.960683 | orchestrator | 2026-01-01 01:51:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:04.003464 | orchestrator | 2026-01-01 01:51:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:04.005778 | orchestrator | 2026-01-01 01:51:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:04.005944 | orchestrator | 2026-01-01 01:51:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:07.055103 | orchestrator | 2026-01-01 01:51:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:07.056599 | orchestrator | 2026-01-01 01:51:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:07.056663 | orchestrator | 2026-01-01 01:51:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:10.095257 | orchestrator | 2026-01-01 
01:51:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:10.096143 | orchestrator | 2026-01-01 01:51:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:10.096194 | orchestrator | 2026-01-01 01:51:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:13.144460 | orchestrator | 2026-01-01 01:51:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:13.146727 | orchestrator | 2026-01-01 01:51:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:13.146796 | orchestrator | 2026-01-01 01:51:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:16.195966 | orchestrator | 2026-01-01 01:51:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:16.198744 | orchestrator | 2026-01-01 01:51:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:16.198806 | orchestrator | 2026-01-01 01:51:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:19.253620 | orchestrator | 2026-01-01 01:51:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:19.256773 | orchestrator | 2026-01-01 01:51:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:19.256865 | orchestrator | 2026-01-01 01:51:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:22.308493 | orchestrator | 2026-01-01 01:51:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:22.310278 | orchestrator | 2026-01-01 01:51:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:22.310350 | orchestrator | 2026-01-01 01:51:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:25.352289 | orchestrator | 2026-01-01 01:51:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:51:25.354691 | orchestrator | 2026-01-01 01:51:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:25.354755 | orchestrator | 2026-01-01 01:51:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:28.402265 | orchestrator | 2026-01-01 01:51:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:28.404821 | orchestrator | 2026-01-01 01:51:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:28.404875 | orchestrator | 2026-01-01 01:51:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:31.458555 | orchestrator | 2026-01-01 01:51:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:31.458988 | orchestrator | 2026-01-01 01:51:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:31.459236 | orchestrator | 2026-01-01 01:51:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:34.512079 | orchestrator | 2026-01-01 01:51:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:34.513976 | orchestrator | 2026-01-01 01:51:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:34.514104 | orchestrator | 2026-01-01 01:51:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:37.557225 | orchestrator | 2026-01-01 01:51:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:37.558754 | orchestrator | 2026-01-01 01:51:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:37.558785 | orchestrator | 2026-01-01 01:51:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:40.597734 | orchestrator | 2026-01-01 01:51:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:40.599146 | orchestrator | 2026-01-01 01:51:40 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:40.599199 | orchestrator | 2026-01-01 01:51:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:43.654593 | orchestrator | 2026-01-01 01:51:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:43.657156 | orchestrator | 2026-01-01 01:51:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:43.657291 | orchestrator | 2026-01-01 01:51:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:46.707179 | orchestrator | 2026-01-01 01:51:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:46.708546 | orchestrator | 2026-01-01 01:51:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:46.708578 | orchestrator | 2026-01-01 01:51:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:49.762390 | orchestrator | 2026-01-01 01:51:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:49.764278 | orchestrator | 2026-01-01 01:51:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:49.764333 | orchestrator | 2026-01-01 01:51:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:52.822498 | orchestrator | 2026-01-01 01:51:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:52.824230 | orchestrator | 2026-01-01 01:51:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:52.824247 | orchestrator | 2026-01-01 01:51:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:55.876215 | orchestrator | 2026-01-01 01:51:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:55.878807 | orchestrator | 2026-01-01 01:51:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:51:55.879204 | orchestrator | 2026-01-01 01:51:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:51:58.931008 | orchestrator | 2026-01-01 01:51:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:51:58.932396 | orchestrator | 2026-01-01 01:51:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:51:58.932411 | orchestrator | 2026-01-01 01:51:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:52:01.979968 | orchestrator | 2026-01-01 01:52:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:52:01.981526 | orchestrator | 2026-01-01 01:52:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:52:01.981846 | orchestrator | 2026-01-01 01:52:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:52:05.028038 | orchestrator | 2026-01-01 01:52:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:52:05.030689 | orchestrator | 2026-01-01 01:52:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:52:05.030738 | orchestrator | 2026-01-01 01:52:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:52:08.083450 | orchestrator | 2026-01-01 01:52:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:52:08.086800 | orchestrator | 2026-01-01 01:52:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:52:08.086868 | orchestrator | 2026-01-01 01:52:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:52:11.134915 | orchestrator | 2026-01-01 01:52:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:52:11.137928 | orchestrator | 2026-01-01 01:52:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:52:11.137990 | orchestrator | 2026-01-01 01:52:11 | INFO  | Wait 1 second(s) 
until the next check
2026-01-01 01:52:14.184857 | orchestrator | 2026-01-01 01:52:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:52:14.186002 | orchestrator | 2026-01-01 01:52:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:52:14.186094 | orchestrator | 2026-01-01 01:52:14 | INFO  | Wait 1 second(s) until the next check
[… the same two STARTED polls repeat roughly every 3 seconds through 2026-01-01 01:57:25 …]
2026-01-01 01:57:28.390626 | orchestrator | 2026-01-01 01:57:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 01:57:28.392387 | orchestrator | 2026-01-01 01:57:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 01:57:28.392441 | orchestrator | 2026-01-01 01:57:28 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 01:57:31.439875 | orchestrator | 2026-01-01 01:57:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:57:31.440859 | orchestrator | 2026-01-01 01:57:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:57:31.440896 | orchestrator | 2026-01-01 01:57:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:34.496492 | orchestrator | 2026-01-01 01:57:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:57:34.498065 | orchestrator | 2026-01-01 01:57:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:57:34.498090 | orchestrator | 2026-01-01 01:57:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:37.545394 | orchestrator | 2026-01-01 01:57:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:57:37.548048 | orchestrator | 2026-01-01 01:57:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:57:37.548129 | orchestrator | 2026-01-01 01:57:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:40.600374 | orchestrator | 2026-01-01 01:57:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:57:40.604255 | orchestrator | 2026-01-01 01:57:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:57:40.604314 | orchestrator | 2026-01-01 01:57:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:43.655561 | orchestrator | 2026-01-01 01:57:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:57:43.656831 | orchestrator | 2026-01-01 01:57:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:57:43.656858 | orchestrator | 2026-01-01 01:57:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:46.703282 | orchestrator | 2026-01-01 
01:57:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:57:46.704533 | orchestrator | 2026-01-01 01:57:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:57:46.704614 | orchestrator | 2026-01-01 01:57:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:49.762877 | orchestrator | 2026-01-01 01:57:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:57:49.764585 | orchestrator | 2026-01-01 01:57:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:57:49.764619 | orchestrator | 2026-01-01 01:57:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:52.814581 | orchestrator | 2026-01-01 01:57:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:57:52.816679 | orchestrator | 2026-01-01 01:57:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:57:52.816769 | orchestrator | 2026-01-01 01:57:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:55.867475 | orchestrator | 2026-01-01 01:57:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:57:55.868374 | orchestrator | 2026-01-01 01:57:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:57:55.868445 | orchestrator | 2026-01-01 01:57:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:58.919000 | orchestrator | 2026-01-01 01:57:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:57:58.921410 | orchestrator | 2026-01-01 01:57:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:57:58.921479 | orchestrator | 2026-01-01 01:57:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:01.970756 | orchestrator | 2026-01-01 01:58:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:58:01.972239 | orchestrator | 2026-01-01 01:58:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:01.972298 | orchestrator | 2026-01-01 01:58:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:05.026339 | orchestrator | 2026-01-01 01:58:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:05.028091 | orchestrator | 2026-01-01 01:58:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:05.028145 | orchestrator | 2026-01-01 01:58:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:08.072816 | orchestrator | 2026-01-01 01:58:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:08.074536 | orchestrator | 2026-01-01 01:58:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:08.074589 | orchestrator | 2026-01-01 01:58:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:11.120416 | orchestrator | 2026-01-01 01:58:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:11.122937 | orchestrator | 2026-01-01 01:58:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:11.122994 | orchestrator | 2026-01-01 01:58:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:14.170295 | orchestrator | 2026-01-01 01:58:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:14.171564 | orchestrator | 2026-01-01 01:58:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:14.171606 | orchestrator | 2026-01-01 01:58:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:17.217339 | orchestrator | 2026-01-01 01:58:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:17.218632 | orchestrator | 2026-01-01 01:58:17 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:17.218697 | orchestrator | 2026-01-01 01:58:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:20.268176 | orchestrator | 2026-01-01 01:58:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:20.270933 | orchestrator | 2026-01-01 01:58:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:20.270999 | orchestrator | 2026-01-01 01:58:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:23.321350 | orchestrator | 2026-01-01 01:58:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:23.323409 | orchestrator | 2026-01-01 01:58:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:23.323754 | orchestrator | 2026-01-01 01:58:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:26.373566 | orchestrator | 2026-01-01 01:58:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:26.375743 | orchestrator | 2026-01-01 01:58:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:26.375800 | orchestrator | 2026-01-01 01:58:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:29.424128 | orchestrator | 2026-01-01 01:58:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:29.427161 | orchestrator | 2026-01-01 01:58:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:29.427675 | orchestrator | 2026-01-01 01:58:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:32.474328 | orchestrator | 2026-01-01 01:58:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:32.477191 | orchestrator | 2026-01-01 01:58:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:58:32.477224 | orchestrator | 2026-01-01 01:58:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:35.522858 | orchestrator | 2026-01-01 01:58:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:35.523505 | orchestrator | 2026-01-01 01:58:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:35.523546 | orchestrator | 2026-01-01 01:58:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:38.576963 | orchestrator | 2026-01-01 01:58:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:38.580219 | orchestrator | 2026-01-01 01:58:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:38.580356 | orchestrator | 2026-01-01 01:58:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:41.635520 | orchestrator | 2026-01-01 01:58:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:41.638612 | orchestrator | 2026-01-01 01:58:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:41.638673 | orchestrator | 2026-01-01 01:58:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:44.688918 | orchestrator | 2026-01-01 01:58:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:44.691440 | orchestrator | 2026-01-01 01:58:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:44.691512 | orchestrator | 2026-01-01 01:58:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:47.743437 | orchestrator | 2026-01-01 01:58:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:47.746214 | orchestrator | 2026-01-01 01:58:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:47.746301 | orchestrator | 2026-01-01 01:58:47 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:58:50.796628 | orchestrator | 2026-01-01 01:58:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:50.798614 | orchestrator | 2026-01-01 01:58:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:50.798670 | orchestrator | 2026-01-01 01:58:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:53.850974 | orchestrator | 2026-01-01 01:58:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:53.852509 | orchestrator | 2026-01-01 01:58:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:53.852564 | orchestrator | 2026-01-01 01:58:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:56.895204 | orchestrator | 2026-01-01 01:58:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:56.896562 | orchestrator | 2026-01-01 01:58:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:56.896601 | orchestrator | 2026-01-01 01:58:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:59.939356 | orchestrator | 2026-01-01 01:58:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:58:59.941691 | orchestrator | 2026-01-01 01:58:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:58:59.941765 | orchestrator | 2026-01-01 01:58:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:02.987429 | orchestrator | 2026-01-01 01:59:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:02.989442 | orchestrator | 2026-01-01 01:59:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:02.989503 | orchestrator | 2026-01-01 01:59:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:06.034250 | orchestrator | 2026-01-01 
01:59:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:06.035593 | orchestrator | 2026-01-01 01:59:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:06.035637 | orchestrator | 2026-01-01 01:59:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:09.088162 | orchestrator | 2026-01-01 01:59:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:09.089963 | orchestrator | 2026-01-01 01:59:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:09.090136 | orchestrator | 2026-01-01 01:59:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:12.138861 | orchestrator | 2026-01-01 01:59:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:12.141149 | orchestrator | 2026-01-01 01:59:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:12.141784 | orchestrator | 2026-01-01 01:59:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:15.196434 | orchestrator | 2026-01-01 01:59:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:15.198741 | orchestrator | 2026-01-01 01:59:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:15.198773 | orchestrator | 2026-01-01 01:59:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:18.252590 | orchestrator | 2026-01-01 01:59:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:18.253857 | orchestrator | 2026-01-01 01:59:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:18.253972 | orchestrator | 2026-01-01 01:59:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:21.295948 | orchestrator | 2026-01-01 01:59:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 01:59:21.296150 | orchestrator | 2026-01-01 01:59:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:21.296174 | orchestrator | 2026-01-01 01:59:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:24.346423 | orchestrator | 2026-01-01 01:59:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:24.348092 | orchestrator | 2026-01-01 01:59:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:24.348149 | orchestrator | 2026-01-01 01:59:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:27.389460 | orchestrator | 2026-01-01 01:59:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:27.391911 | orchestrator | 2026-01-01 01:59:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:27.391984 | orchestrator | 2026-01-01 01:59:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:30.439297 | orchestrator | 2026-01-01 01:59:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:30.441057 | orchestrator | 2026-01-01 01:59:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:30.441109 | orchestrator | 2026-01-01 01:59:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:33.503272 | orchestrator | 2026-01-01 01:59:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:33.504794 | orchestrator | 2026-01-01 01:59:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:33.504839 | orchestrator | 2026-01-01 01:59:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:36.554915 | orchestrator | 2026-01-01 01:59:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:36.558291 | orchestrator | 2026-01-01 01:59:36 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:36.558375 | orchestrator | 2026-01-01 01:59:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:39.610520 | orchestrator | 2026-01-01 01:59:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:39.613306 | orchestrator | 2026-01-01 01:59:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:39.613449 | orchestrator | 2026-01-01 01:59:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:42.667664 | orchestrator | 2026-01-01 01:59:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:42.671070 | orchestrator | 2026-01-01 01:59:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:42.671106 | orchestrator | 2026-01-01 01:59:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:45.721423 | orchestrator | 2026-01-01 01:59:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:45.722887 | orchestrator | 2026-01-01 01:59:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:45.722941 | orchestrator | 2026-01-01 01:59:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:48.771817 | orchestrator | 2026-01-01 01:59:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:48.773838 | orchestrator | 2026-01-01 01:59:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:48.773983 | orchestrator | 2026-01-01 01:59:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:51.822305 | orchestrator | 2026-01-01 01:59:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:51.823311 | orchestrator | 2026-01-01 01:59:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
01:59:51.823340 | orchestrator | 2026-01-01 01:59:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:54.874832 | orchestrator | 2026-01-01 01:59:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:54.876767 | orchestrator | 2026-01-01 01:59:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:54.876801 | orchestrator | 2026-01-01 01:59:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:59:57.927771 | orchestrator | 2026-01-01 01:59:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 01:59:57.928802 | orchestrator | 2026-01-01 01:59:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 01:59:57.928835 | orchestrator | 2026-01-01 01:59:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:00.977307 | orchestrator | 2026-01-01 02:00:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:00.979373 | orchestrator | 2026-01-01 02:00:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:00.979465 | orchestrator | 2026-01-01 02:00:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:04.028846 | orchestrator | 2026-01-01 02:00:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:04.031116 | orchestrator | 2026-01-01 02:00:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:04.031658 | orchestrator | 2026-01-01 02:00:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:07.082339 | orchestrator | 2026-01-01 02:00:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:07.083558 | orchestrator | 2026-01-01 02:00:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:07.083607 | orchestrator | 2026-01-01 02:00:07 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:00:10.133074 | orchestrator | 2026-01-01 02:00:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:10.134528 | orchestrator | 2026-01-01 02:00:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:10.134611 | orchestrator | 2026-01-01 02:00:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:13.183403 | orchestrator | 2026-01-01 02:00:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:13.184555 | orchestrator | 2026-01-01 02:00:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:13.184621 | orchestrator | 2026-01-01 02:00:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:16.236155 | orchestrator | 2026-01-01 02:00:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:16.238169 | orchestrator | 2026-01-01 02:00:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:16.238209 | orchestrator | 2026-01-01 02:00:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:19.292296 | orchestrator | 2026-01-01 02:00:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:19.294601 | orchestrator | 2026-01-01 02:00:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:19.294784 | orchestrator | 2026-01-01 02:00:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:22.345616 | orchestrator | 2026-01-01 02:00:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:22.348185 | orchestrator | 2026-01-01 02:00:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:22.348239 | orchestrator | 2026-01-01 02:00:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:25.402634 | orchestrator | 2026-01-01 
02:00:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:25.404836 | orchestrator | 2026-01-01 02:00:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:25.404921 | orchestrator | 2026-01-01 02:00:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:28.457895 | orchestrator | 2026-01-01 02:00:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:28.460840 | orchestrator | 2026-01-01 02:00:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:28.460898 | orchestrator | 2026-01-01 02:00:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:31.509156 | orchestrator | 2026-01-01 02:00:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:31.510858 | orchestrator | 2026-01-01 02:00:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:31.510915 | orchestrator | 2026-01-01 02:00:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:34.567836 | orchestrator | 2026-01-01 02:00:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:34.568908 | orchestrator | 2026-01-01 02:00:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:34.568941 | orchestrator | 2026-01-01 02:00:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:37.616626 | orchestrator | 2026-01-01 02:00:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:37.618260 | orchestrator | 2026-01-01 02:00:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:37.618341 | orchestrator | 2026-01-01 02:00:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:40.668252 | orchestrator | 2026-01-01 02:00:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:00:40.669602 | orchestrator | 2026-01-01 02:00:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:40.669760 | orchestrator | 2026-01-01 02:00:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:43.709828 | orchestrator | 2026-01-01 02:00:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:43.712269 | orchestrator | 2026-01-01 02:00:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:43.712345 | orchestrator | 2026-01-01 02:00:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:46.760384 | orchestrator | 2026-01-01 02:00:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:46.761979 | orchestrator | 2026-01-01 02:00:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:46.762065 | orchestrator | 2026-01-01 02:00:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:49.813440 | orchestrator | 2026-01-01 02:00:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:49.816169 | orchestrator | 2026-01-01 02:00:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:49.816218 | orchestrator | 2026-01-01 02:00:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:52.867737 | orchestrator | 2026-01-01 02:00:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:52.869817 | orchestrator | 2026-01-01 02:00:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:52.869943 | orchestrator | 2026-01-01 02:00:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:55.922522 | orchestrator | 2026-01-01 02:00:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:55.926760 | orchestrator | 2026-01-01 02:00:55 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:55.926845 | orchestrator | 2026-01-01 02:00:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:00:58.971047 | orchestrator | 2026-01-01 02:00:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:00:58.973823 | orchestrator | 2026-01-01 02:00:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:00:58.973883 | orchestrator | 2026-01-01 02:00:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:01:02.024645 | orchestrator | 2026-01-01 02:01:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:01:02.025608 | orchestrator | 2026-01-01 02:01:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:01:02.025630 | orchestrator | 2026-01-01 02:01:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:01:05.069577 | orchestrator | 2026-01-01 02:01:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:01:05.071413 | orchestrator | 2026-01-01 02:01:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:01:05.071482 | orchestrator | 2026-01-01 02:01:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:01:08.120473 | orchestrator | 2026-01-01 02:01:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:01:08.122009 | orchestrator | 2026-01-01 02:01:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:01:08.122178 | orchestrator | 2026-01-01 02:01:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:01:11.165156 | orchestrator | 2026-01-01 02:01:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:01:11.166996 | orchestrator | 2026-01-01 02:01:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:01:11.167039 | orchestrator | 2026-01-01 02:01:11 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:01:14.217869 | orchestrator | 2026-01-01 02:01:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:01:14.219180 | orchestrator | 2026-01-01 02:01:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:01:14.219269 | orchestrator | 2026-01-01 02:01:14 | INFO  | Wait 1 second(s) until the next check
[… identical polling output repeated every ~3 seconds from 02:01:17 through 02:06:40; both tasks remained in state STARTED throughout …]
2026-01-01 02:06:43.726004 | orchestrator | 2026-01-01 02:06:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:06:43.727852 | orchestrator | 2026-01-01 02:06:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:06:43.727880 | orchestrator | 2026-01-01 02:06:43 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:06:46.773839 | orchestrator | 2026-01-01 02:06:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:06:46.775682 | orchestrator | 2026-01-01 02:06:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:06:46.775757 | orchestrator | 2026-01-01 02:06:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:49.821951 | orchestrator | 2026-01-01 02:06:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:06:49.823748 | orchestrator | 2026-01-01 02:06:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:06:49.823926 | orchestrator | 2026-01-01 02:06:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:52.872763 | orchestrator | 2026-01-01 02:06:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:06:52.874475 | orchestrator | 2026-01-01 02:06:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:06:52.874591 | orchestrator | 2026-01-01 02:06:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:55.926136 | orchestrator | 2026-01-01 02:06:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:06:55.928304 | orchestrator | 2026-01-01 02:06:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:06:55.928384 | orchestrator | 2026-01-01 02:06:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:58.986861 | orchestrator | 2026-01-01 02:06:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:06:58.989070 | orchestrator | 2026-01-01 02:06:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:06:58.989106 | orchestrator | 2026-01-01 02:06:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:02.048938 | orchestrator | 2026-01-01 
02:07:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:02.049834 | orchestrator | 2026-01-01 02:07:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:02.049867 | orchestrator | 2026-01-01 02:07:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:05.094145 | orchestrator | 2026-01-01 02:07:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:05.094411 | orchestrator | 2026-01-01 02:07:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:05.094449 | orchestrator | 2026-01-01 02:07:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:08.141453 | orchestrator | 2026-01-01 02:07:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:08.143811 | orchestrator | 2026-01-01 02:07:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:08.143864 | orchestrator | 2026-01-01 02:07:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:11.185301 | orchestrator | 2026-01-01 02:07:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:11.187296 | orchestrator | 2026-01-01 02:07:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:11.187362 | orchestrator | 2026-01-01 02:07:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:14.241360 | orchestrator | 2026-01-01 02:07:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:14.241711 | orchestrator | 2026-01-01 02:07:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:14.241739 | orchestrator | 2026-01-01 02:07:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:17.301472 | orchestrator | 2026-01-01 02:07:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:07:17.302556 | orchestrator | 2026-01-01 02:07:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:17.302624 | orchestrator | 2026-01-01 02:07:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:20.353779 | orchestrator | 2026-01-01 02:07:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:20.354458 | orchestrator | 2026-01-01 02:07:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:20.354480 | orchestrator | 2026-01-01 02:07:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:23.407160 | orchestrator | 2026-01-01 02:07:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:23.409002 | orchestrator | 2026-01-01 02:07:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:23.409133 | orchestrator | 2026-01-01 02:07:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:26.456411 | orchestrator | 2026-01-01 02:07:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:26.458486 | orchestrator | 2026-01-01 02:07:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:26.458589 | orchestrator | 2026-01-01 02:07:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:29.502705 | orchestrator | 2026-01-01 02:07:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:29.504809 | orchestrator | 2026-01-01 02:07:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:29.504954 | orchestrator | 2026-01-01 02:07:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:32.556853 | orchestrator | 2026-01-01 02:07:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:32.559671 | orchestrator | 2026-01-01 02:07:32 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:32.560068 | orchestrator | 2026-01-01 02:07:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:35.612978 | orchestrator | 2026-01-01 02:07:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:35.613216 | orchestrator | 2026-01-01 02:07:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:35.613703 | orchestrator | 2026-01-01 02:07:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:38.671019 | orchestrator | 2026-01-01 02:07:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:38.672696 | orchestrator | 2026-01-01 02:07:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:38.672853 | orchestrator | 2026-01-01 02:07:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:41.717026 | orchestrator | 2026-01-01 02:07:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:41.718590 | orchestrator | 2026-01-01 02:07:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:41.718667 | orchestrator | 2026-01-01 02:07:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:44.761992 | orchestrator | 2026-01-01 02:07:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:44.762861 | orchestrator | 2026-01-01 02:07:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:44.763307 | orchestrator | 2026-01-01 02:07:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:47.818324 | orchestrator | 2026-01-01 02:07:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:47.820806 | orchestrator | 2026-01-01 02:07:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:07:47.820839 | orchestrator | 2026-01-01 02:07:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:50.870739 | orchestrator | 2026-01-01 02:07:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:50.872785 | orchestrator | 2026-01-01 02:07:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:50.872810 | orchestrator | 2026-01-01 02:07:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:53.921782 | orchestrator | 2026-01-01 02:07:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:53.924811 | orchestrator | 2026-01-01 02:07:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:53.924889 | orchestrator | 2026-01-01 02:07:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:56.969586 | orchestrator | 2026-01-01 02:07:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:07:56.970910 | orchestrator | 2026-01-01 02:07:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:07:56.970973 | orchestrator | 2026-01-01 02:07:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:00.019187 | orchestrator | 2026-01-01 02:08:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:00.021260 | orchestrator | 2026-01-01 02:08:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:00.021354 | orchestrator | 2026-01-01 02:08:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:03.062376 | orchestrator | 2026-01-01 02:08:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:03.064301 | orchestrator | 2026-01-01 02:08:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:03.064353 | orchestrator | 2026-01-01 02:08:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:08:06.110386 | orchestrator | 2026-01-01 02:08:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:06.112209 | orchestrator | 2026-01-01 02:08:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:06.112276 | orchestrator | 2026-01-01 02:08:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:09.157747 | orchestrator | 2026-01-01 02:08:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:09.159465 | orchestrator | 2026-01-01 02:08:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:09.159532 | orchestrator | 2026-01-01 02:08:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:12.202873 | orchestrator | 2026-01-01 02:08:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:12.204695 | orchestrator | 2026-01-01 02:08:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:12.204753 | orchestrator | 2026-01-01 02:08:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:15.249436 | orchestrator | 2026-01-01 02:08:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:15.251111 | orchestrator | 2026-01-01 02:08:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:15.251145 | orchestrator | 2026-01-01 02:08:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:18.305027 | orchestrator | 2026-01-01 02:08:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:18.306890 | orchestrator | 2026-01-01 02:08:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:18.307390 | orchestrator | 2026-01-01 02:08:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:21.359199 | orchestrator | 2026-01-01 
02:08:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:21.359978 | orchestrator | 2026-01-01 02:08:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:21.360020 | orchestrator | 2026-01-01 02:08:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:24.410950 | orchestrator | 2026-01-01 02:08:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:24.412671 | orchestrator | 2026-01-01 02:08:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:24.412704 | orchestrator | 2026-01-01 02:08:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:27.465030 | orchestrator | 2026-01-01 02:08:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:27.467797 | orchestrator | 2026-01-01 02:08:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:27.467821 | orchestrator | 2026-01-01 02:08:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:30.521297 | orchestrator | 2026-01-01 02:08:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:30.522621 | orchestrator | 2026-01-01 02:08:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:30.522763 | orchestrator | 2026-01-01 02:08:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:33.568067 | orchestrator | 2026-01-01 02:08:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:33.570343 | orchestrator | 2026-01-01 02:08:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:33.570414 | orchestrator | 2026-01-01 02:08:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:36.617470 | orchestrator | 2026-01-01 02:08:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:08:36.618844 | orchestrator | 2026-01-01 02:08:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:36.618906 | orchestrator | 2026-01-01 02:08:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:39.676370 | orchestrator | 2026-01-01 02:08:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:39.679013 | orchestrator | 2026-01-01 02:08:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:39.679059 | orchestrator | 2026-01-01 02:08:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:42.723838 | orchestrator | 2026-01-01 02:08:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:42.724870 | orchestrator | 2026-01-01 02:08:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:42.724903 | orchestrator | 2026-01-01 02:08:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:45.778685 | orchestrator | 2026-01-01 02:08:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:45.781980 | orchestrator | 2026-01-01 02:08:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:45.782191 | orchestrator | 2026-01-01 02:08:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:48.836015 | orchestrator | 2026-01-01 02:08:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:48.837688 | orchestrator | 2026-01-01 02:08:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:48.837727 | orchestrator | 2026-01-01 02:08:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:51.884447 | orchestrator | 2026-01-01 02:08:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:51.887069 | orchestrator | 2026-01-01 02:08:51 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:51.887121 | orchestrator | 2026-01-01 02:08:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:54.933003 | orchestrator | 2026-01-01 02:08:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:54.934856 | orchestrator | 2026-01-01 02:08:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:54.934896 | orchestrator | 2026-01-01 02:08:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:08:57.977913 | orchestrator | 2026-01-01 02:08:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:08:57.980242 | orchestrator | 2026-01-01 02:08:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:08:57.980304 | orchestrator | 2026-01-01 02:08:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:01.035804 | orchestrator | 2026-01-01 02:09:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:01.037373 | orchestrator | 2026-01-01 02:09:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:01.037403 | orchestrator | 2026-01-01 02:09:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:04.081865 | orchestrator | 2026-01-01 02:09:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:04.083851 | orchestrator | 2026-01-01 02:09:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:04.083907 | orchestrator | 2026-01-01 02:09:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:07.127314 | orchestrator | 2026-01-01 02:09:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:07.128859 | orchestrator | 2026-01-01 02:09:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:09:07.128960 | orchestrator | 2026-01-01 02:09:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:10.181623 | orchestrator | 2026-01-01 02:09:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:10.182224 | orchestrator | 2026-01-01 02:09:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:10.182267 | orchestrator | 2026-01-01 02:09:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:13.227896 | orchestrator | 2026-01-01 02:09:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:13.229657 | orchestrator | 2026-01-01 02:09:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:13.229724 | orchestrator | 2026-01-01 02:09:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:16.271300 | orchestrator | 2026-01-01 02:09:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:16.272245 | orchestrator | 2026-01-01 02:09:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:16.272695 | orchestrator | 2026-01-01 02:09:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:19.320937 | orchestrator | 2026-01-01 02:09:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:19.323282 | orchestrator | 2026-01-01 02:09:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:19.323346 | orchestrator | 2026-01-01 02:09:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:22.362332 | orchestrator | 2026-01-01 02:09:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:22.363454 | orchestrator | 2026-01-01 02:09:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:22.364274 | orchestrator | 2026-01-01 02:09:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:09:25.414975 | orchestrator | 2026-01-01 02:09:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:25.419203 | orchestrator | 2026-01-01 02:09:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:25.419268 | orchestrator | 2026-01-01 02:09:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:28.472165 | orchestrator | 2026-01-01 02:09:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:28.474289 | orchestrator | 2026-01-01 02:09:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:28.474430 | orchestrator | 2026-01-01 02:09:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:31.518359 | orchestrator | 2026-01-01 02:09:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:31.519711 | orchestrator | 2026-01-01 02:09:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:31.519754 | orchestrator | 2026-01-01 02:09:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:34.566359 | orchestrator | 2026-01-01 02:09:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:34.567600 | orchestrator | 2026-01-01 02:09:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:34.567649 | orchestrator | 2026-01-01 02:09:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:37.615801 | orchestrator | 2026-01-01 02:09:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:37.616955 | orchestrator | 2026-01-01 02:09:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:37.617010 | orchestrator | 2026-01-01 02:09:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:40.669047 | orchestrator | 2026-01-01 
02:09:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:40.669538 | orchestrator | 2026-01-01 02:09:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:40.669830 | orchestrator | 2026-01-01 02:09:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:43.718504 | orchestrator | 2026-01-01 02:09:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:43.721238 | orchestrator | 2026-01-01 02:09:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:43.721293 | orchestrator | 2026-01-01 02:09:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:46.764396 | orchestrator | 2026-01-01 02:09:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:46.767355 | orchestrator | 2026-01-01 02:09:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:46.767427 | orchestrator | 2026-01-01 02:09:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:49.818384 | orchestrator | 2026-01-01 02:09:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:49.819970 | orchestrator | 2026-01-01 02:09:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:49.820243 | orchestrator | 2026-01-01 02:09:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:52.872422 | orchestrator | 2026-01-01 02:09:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:52.876105 | orchestrator | 2026-01-01 02:09:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:52.876213 | orchestrator | 2026-01-01 02:09:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:55.927815 | orchestrator | 2026-01-01 02:09:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:09:55.929898 | orchestrator | 2026-01-01 02:09:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:55.929937 | orchestrator | 2026-01-01 02:09:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:09:58.985091 | orchestrator | 2026-01-01 02:09:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:09:58.987177 | orchestrator | 2026-01-01 02:09:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:09:58.987266 | orchestrator | 2026-01-01 02:09:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:10:02.044376 | orchestrator | 2026-01-01 02:10:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:10:02.046550 | orchestrator | 2026-01-01 02:10:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:10:02.046584 | orchestrator | 2026-01-01 02:10:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:10:05.097821 | orchestrator | 2026-01-01 02:10:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:10:05.099299 | orchestrator | 2026-01-01 02:10:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:10:05.099343 | orchestrator | 2026-01-01 02:10:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:10:08.143562 | orchestrator | 2026-01-01 02:10:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:10:08.146412 | orchestrator | 2026-01-01 02:10:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:10:08.146522 | orchestrator | 2026-01-01 02:10:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:10:11.205858 | orchestrator | 2026-01-01 02:10:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:10:11.207914 | orchestrator | 2026-01-01 02:10:11 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:10:11.207960 | orchestrator | 2026-01-01 02:10:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:10:14.263077 | orchestrator | 2026-01-01 02:10:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:10:14.263357 | orchestrator | 2026-01-01 02:10:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:10:14.263417 | orchestrator | 2026-01-01 02:10:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:10:17.320904 | orchestrator | 2026-01-01 02:10:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:10:17.322073 | orchestrator | 2026-01-01 02:10:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:10:17.322112 | orchestrator | 2026-01-01 02:10:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:10:20.365977 | orchestrator | 2026-01-01 02:10:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:10:20.368481 | orchestrator | 2026-01-01 02:10:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:10:20.369321 | orchestrator | 2026-01-01 02:10:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:10:23.421725 | orchestrator | 2026-01-01 02:10:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:10:23.423626 | orchestrator | 2026-01-01 02:10:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:10:23.423679 | orchestrator | 2026-01-01 02:10:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:10:26.471892 | orchestrator | 2026-01-01 02:10:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:10:26.473265 | orchestrator | 2026-01-01 02:10:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:10:26.473297 | orchestrator | 2026-01-01 02:10:26 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:10:29.524335 | orchestrator | 2026-01-01 02:10:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:10:29.526264 | orchestrator | 2026-01-01 02:10:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:10:29.526297 | orchestrator | 2026-01-01 02:10:29 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:10:32.572160 | orchestrator | 2026-01-01 02:10:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:10:32.573682 | orchestrator | 2026-01-01 02:10:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:10:32.573732 | orchestrator | 2026-01-01 02:10:32 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:10:35.619829 | orchestrator | 2026-01-01 02:10:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:10:35.620380 | orchestrator | 2026-01-01 02:10:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:10:35.620453 | orchestrator | 2026-01-01 02:10:35 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:10:38.667810 | orchestrator | 2026-01-01 02:10:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:10:38.669525 | orchestrator | 2026-01-01 02:10:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:10:38.669597 | orchestrator | 2026-01-01 02:10:38 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:10:41.723148 | orchestrator | 2026-01-01 02:10:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:10:41.725306 | orchestrator | 2026-01-01 02:10:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:10:41.725929 | orchestrator | 2026-01-01 02:10:41 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:10:44.778338 | orchestrator | 2026-01-01 02:10:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:10:44.780176 | orchestrator | 2026-01-01 02:10:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:10:44.780228 | orchestrator | 2026-01-01 02:10:44 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:10:47.830391 | orchestrator | 2026-01-01 02:10:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:10:47.832885 | orchestrator | 2026-01-01 02:10:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:10:47.832955 | orchestrator | 2026-01-01 02:10:47 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:10:50.877035 | orchestrator | 2026-01-01 02:10:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:10:50.879579 | orchestrator | 2026-01-01 02:10:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:10:50.879627 | orchestrator | 2026-01-01 02:10:50 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:10:53.935350 | orchestrator | 2026-01-01 02:10:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:10:53.937253 | orchestrator | 2026-01-01 02:10:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:10:53.938524 | orchestrator | 2026-01-01 02:10:53 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:10:56.989162 | orchestrator | 2026-01-01 02:10:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:10:56.992505 | orchestrator | 2026-01-01 02:10:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:10:56.992555 | orchestrator | 2026-01-01 02:10:56 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:00.045947 | orchestrator | 2026-01-01 02:11:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:00.046375 | orchestrator | 2026-01-01 02:11:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:00.046560 | orchestrator | 2026-01-01 02:11:00 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:03.096747 | orchestrator | 2026-01-01 02:11:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:03.100094 | orchestrator | 2026-01-01 02:11:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:03.100227 | orchestrator | 2026-01-01 02:11:03 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:06.146839 | orchestrator | 2026-01-01 02:11:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:06.148203 | orchestrator | 2026-01-01 02:11:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:06.148250 | orchestrator | 2026-01-01 02:11:06 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:09.198789 | orchestrator | 2026-01-01 02:11:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:09.200134 | orchestrator | 2026-01-01 02:11:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:09.200177 | orchestrator | 2026-01-01 02:11:09 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:12.247750 | orchestrator | 2026-01-01 02:11:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:12.250350 | orchestrator | 2026-01-01 02:11:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:12.250458 | orchestrator | 2026-01-01 02:11:12 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:15.314519 | orchestrator | 2026-01-01 02:11:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:15.315136 | orchestrator | 2026-01-01 02:11:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:15.315943 | orchestrator | 2026-01-01 02:11:15 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:18.364173 | orchestrator | 2026-01-01 02:11:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:18.365848 | orchestrator | 2026-01-01 02:11:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:18.365891 | orchestrator | 2026-01-01 02:11:18 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:21.415982 | orchestrator | 2026-01-01 02:11:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:21.418452 | orchestrator | 2026-01-01 02:11:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:21.418489 | orchestrator | 2026-01-01 02:11:21 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:24.472881 | orchestrator | 2026-01-01 02:11:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:24.475430 | orchestrator | 2026-01-01 02:11:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:24.475483 | orchestrator | 2026-01-01 02:11:24 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:27.525283 | orchestrator | 2026-01-01 02:11:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:27.526899 | orchestrator | 2026-01-01 02:11:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:27.526984 | orchestrator | 2026-01-01 02:11:27 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:30.572098 | orchestrator | 2026-01-01 02:11:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:30.573497 | orchestrator | 2026-01-01 02:11:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:30.573549 | orchestrator | 2026-01-01 02:11:30 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:33.623711 | orchestrator | 2026-01-01 02:11:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:33.624957 | orchestrator | 2026-01-01 02:11:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:33.625008 | orchestrator | 2026-01-01 02:11:33 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:36.670957 | orchestrator | 2026-01-01 02:11:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:36.675083 | orchestrator | 2026-01-01 02:11:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:36.675179 | orchestrator | 2026-01-01 02:11:36 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:39.728534 | orchestrator | 2026-01-01 02:11:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:39.732955 | orchestrator | 2026-01-01 02:11:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:39.733051 | orchestrator | 2026-01-01 02:11:39 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:42.784997 | orchestrator | 2026-01-01 02:11:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:42.787584 | orchestrator | 2026-01-01 02:11:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:42.787649 | orchestrator | 2026-01-01 02:11:42 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:45.836064 | orchestrator | 2026-01-01 02:11:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:45.838734 | orchestrator | 2026-01-01 02:11:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:45.838800 | orchestrator | 2026-01-01 02:11:45 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:48.884623 | orchestrator | 2026-01-01 02:11:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:48.886573 | orchestrator | 2026-01-01 02:11:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:48.886749 | orchestrator | 2026-01-01 02:11:48 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:51.938894 | orchestrator | 2026-01-01 02:11:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:51.941730 | orchestrator | 2026-01-01 02:11:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:51.941786 | orchestrator | 2026-01-01 02:11:51 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:54.985783 | orchestrator | 2026-01-01 02:11:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:54.988682 | orchestrator | 2026-01-01 02:11:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:54.988736 | orchestrator | 2026-01-01 02:11:54 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:11:58.034808 | orchestrator | 2026-01-01 02:11:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:11:58.036665 | orchestrator | 2026-01-01 02:11:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:11:58.036715 | orchestrator | 2026-01-01 02:11:58 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:01.076538 | orchestrator | 2026-01-01 02:12:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:01.079056 | orchestrator | 2026-01-01 02:12:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:01.079104 | orchestrator | 2026-01-01 02:12:01 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:04.123915 | orchestrator | 2026-01-01 02:12:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:04.125568 | orchestrator | 2026-01-01 02:12:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:04.125663 | orchestrator | 2026-01-01 02:12:04 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:07.177312 | orchestrator | 2026-01-01 02:12:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:07.178750 | orchestrator | 2026-01-01 02:12:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:07.178778 | orchestrator | 2026-01-01 02:12:07 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:10.227172 | orchestrator | 2026-01-01 02:12:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:10.228632 | orchestrator | 2026-01-01 02:12:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:10.228678 | orchestrator | 2026-01-01 02:12:10 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:13.277015 | orchestrator | 2026-01-01 02:12:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:13.279678 | orchestrator | 2026-01-01 02:12:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:13.279784 | orchestrator | 2026-01-01 02:12:13 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:16.335312 | orchestrator | 2026-01-01 02:12:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:16.336171 | orchestrator | 2026-01-01 02:12:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:16.336226 | orchestrator | 2026-01-01 02:12:16 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:19.386081 | orchestrator | 2026-01-01 02:12:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:19.387713 | orchestrator | 2026-01-01 02:12:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:19.387791 | orchestrator | 2026-01-01 02:12:19 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:22.441309 | orchestrator | 2026-01-01 02:12:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:22.441955 | orchestrator | 2026-01-01 02:12:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:22.441990 | orchestrator | 2026-01-01 02:12:22 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:25.492798 | orchestrator | 2026-01-01 02:12:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:25.494770 | orchestrator | 2026-01-01 02:12:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:25.494866 | orchestrator | 2026-01-01 02:12:25 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:28.544755 | orchestrator | 2026-01-01 02:12:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:28.545643 | orchestrator | 2026-01-01 02:12:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:28.546109 | orchestrator | 2026-01-01 02:12:28 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:31.593389 | orchestrator | 2026-01-01 02:12:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:31.596801 | orchestrator | 2026-01-01 02:12:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:31.597582 | orchestrator | 2026-01-01 02:12:31 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:34.646418 | orchestrator | 2026-01-01 02:12:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:34.647962 | orchestrator | 2026-01-01 02:12:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:34.648030 | orchestrator | 2026-01-01 02:12:34 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:37.687613 | orchestrator | 2026-01-01 02:12:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:37.691012 | orchestrator | 2026-01-01 02:12:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:37.691066 | orchestrator | 2026-01-01 02:12:37 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:40.741125 | orchestrator | 2026-01-01 02:12:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:40.745100 | orchestrator | 2026-01-01 02:12:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:40.745229 | orchestrator | 2026-01-01 02:12:40 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:43.796595 | orchestrator | 2026-01-01 02:12:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:43.799380 | orchestrator | 2026-01-01 02:12:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:43.800315 | orchestrator | 2026-01-01 02:12:43 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:46.858329 | orchestrator | 2026-01-01 02:12:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:46.861395 | orchestrator | 2026-01-01 02:12:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:46.861432 | orchestrator | 2026-01-01 02:12:46 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:49.905932 | orchestrator | 2026-01-01 02:12:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:49.908479 | orchestrator | 2026-01-01 02:12:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:49.908541 | orchestrator | 2026-01-01 02:12:49 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:52.961903 | orchestrator | 2026-01-01 02:12:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:52.966329 | orchestrator | 2026-01-01 02:12:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:52.966442 | orchestrator | 2026-01-01 02:12:52 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:56.006984 | orchestrator | 2026-01-01 02:12:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:56.009026 | orchestrator | 2026-01-01 02:12:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:56.009065 | orchestrator | 2026-01-01 02:12:56 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:12:59.065174 | orchestrator | 2026-01-01 02:12:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:12:59.066602 | orchestrator | 2026-01-01 02:12:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:12:59.067020 | orchestrator | 2026-01-01 02:12:59 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:02.110824 | orchestrator | 2026-01-01 02:13:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:02.112046 | orchestrator | 2026-01-01 02:13:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:02.112097 | orchestrator | 2026-01-01 02:13:02 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:05.154418 | orchestrator | 2026-01-01 02:13:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:05.156642 | orchestrator | 2026-01-01 02:13:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:05.156679 | orchestrator | 2026-01-01 02:13:05 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:08.213789 | orchestrator | 2026-01-01 02:13:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:08.215998 | orchestrator | 2026-01-01 02:13:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:08.216542 | orchestrator | 2026-01-01 02:13:08 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:11.262895 | orchestrator | 2026-01-01 02:13:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:11.265727 | orchestrator | 2026-01-01 02:13:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:11.266637 | orchestrator | 2026-01-01 02:13:11 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:14.316659 | orchestrator | 2026-01-01 02:13:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:14.318556 | orchestrator | 2026-01-01 02:13:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:14.318682 | orchestrator | 2026-01-01 02:13:14 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:17.370843 | orchestrator | 2026-01-01 02:13:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:17.372882 | orchestrator | 2026-01-01 02:13:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:17.372926 | orchestrator | 2026-01-01 02:13:17 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:20.424455 | orchestrator | 2026-01-01 02:13:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:20.425996 | orchestrator | 2026-01-01 02:13:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:20.426204 | orchestrator | 2026-01-01 02:13:20 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:23.471046 | orchestrator | 2026-01-01 02:13:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:23.474603 | orchestrator | 2026-01-01 02:13:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:23.474658 | orchestrator | 2026-01-01 02:13:23 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:26.517387 | orchestrator | 2026-01-01 02:13:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:26.519787 | orchestrator | 2026-01-01 02:13:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:26.519855 | orchestrator | 2026-01-01 02:13:26 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:29.567238 | orchestrator | 2026-01-01 02:13:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:29.568932 | orchestrator | 2026-01-01 02:13:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:29.568996 | orchestrator | 2026-01-01 02:13:29 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:32.616512 | orchestrator | 2026-01-01 02:13:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:32.619137 | orchestrator | 2026-01-01 02:13:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:32.619174 | orchestrator | 2026-01-01 02:13:32 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:35.667107 | orchestrator | 2026-01-01 02:13:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:35.669663 | orchestrator | 2026-01-01 02:13:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:35.669741 | orchestrator | 2026-01-01 02:13:35 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:38.711211 | orchestrator | 2026-01-01 02:13:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:38.713698 | orchestrator | 2026-01-01 02:13:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:38.714886 | orchestrator | 2026-01-01 02:13:38 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:41.763105 | orchestrator | 2026-01-01 02:13:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:41.764458 | orchestrator | 2026-01-01 02:13:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:41.764561 | orchestrator | 2026-01-01 02:13:41 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:44.812947 | orchestrator | 2026-01-01 02:13:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:44.815518 | orchestrator | 2026-01-01 02:13:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:44.815536 | orchestrator | 2026-01-01 02:13:44 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:47.859063 | orchestrator | 2026-01-01 02:13:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:47.860398 | orchestrator | 2026-01-01 02:13:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:47.860426 | orchestrator | 2026-01-01 02:13:47 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:50.907428 | orchestrator | 2026-01-01 02:13:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:50.910093 | orchestrator | 2026-01-01 02:13:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:50.910159 | orchestrator | 2026-01-01 02:13:50 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:53.947714 | orchestrator | 2026-01-01 02:13:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:53.949386 | orchestrator | 2026-01-01 02:13:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:53.949460 | orchestrator | 2026-01-01 02:13:53 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:13:57.001731 | orchestrator | 2026-01-01 02:13:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:13:57.003176 | orchestrator | 2026-01-01 02:13:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:13:57.003269 | orchestrator | 2026-01-01 02:13:57 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:00.051545 | orchestrator | 2026-01-01 02:14:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:00.051622 | orchestrator | 2026-01-01 02:14:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:00.051629 | orchestrator | 2026-01-01 02:14:00 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:03.103076 | orchestrator | 2026-01-01 02:14:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:03.105643 | orchestrator | 2026-01-01 02:14:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:03.105834 | orchestrator | 2026-01-01 02:14:03 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:06.162157 | orchestrator | 2026-01-01 02:14:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:06.163979 | orchestrator | 2026-01-01 02:14:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:06.164021 | orchestrator | 2026-01-01 02:14:06 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:09.212676 | orchestrator | 2026-01-01 02:14:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:09.215832 | orchestrator | 2026-01-01 02:14:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:09.215867 | orchestrator | 2026-01-01 02:14:09 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:12.267998 | orchestrator | 2026-01-01 02:14:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:12.271626 | orchestrator | 2026-01-01 02:14:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:12.271766 | orchestrator | 2026-01-01 02:14:12 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:15.327705 | orchestrator | 2026-01-01 02:14:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:15.331721 | orchestrator | 2026-01-01 02:14:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:15.331760 | orchestrator | 2026-01-01 02:14:15 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:18.388185 | orchestrator | 2026-01-01 02:14:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:18.390178 | orchestrator | 2026-01-01 02:14:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:18.390219 | orchestrator | 2026-01-01 02:14:18 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:21.445147 | orchestrator | 2026-01-01 02:14:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:21.447455 | orchestrator | 2026-01-01 02:14:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:21.447479 | orchestrator | 2026-01-01 02:14:21 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:24.493399 | orchestrator | 2026-01-01 02:14:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:24.495629 | orchestrator | 2026-01-01 02:14:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:24.495691 | orchestrator | 2026-01-01 02:14:24 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:27.543187 | orchestrator | 2026-01-01 02:14:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:27.545682 | orchestrator | 2026-01-01 02:14:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:27.545840 | orchestrator | 2026-01-01 02:14:27 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:30.598259 | orchestrator | 2026-01-01 02:14:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:30.599805 | orchestrator | 2026-01-01 02:14:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:30.599835 | orchestrator | 2026-01-01 02:14:30 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:33.640136 | orchestrator | 2026-01-01 02:14:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:33.641101 | orchestrator | 2026-01-01 02:14:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:33.641132 | orchestrator | 2026-01-01 02:14:33 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:36.697471 | orchestrator | 2026-01-01 02:14:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:36.700135 | orchestrator | 2026-01-01 02:14:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:36.700220 | orchestrator | 2026-01-01 02:14:36 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:39.754306 | orchestrator | 2026-01-01 02:14:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:39.756340 | orchestrator | 2026-01-01 02:14:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:39.756437 | orchestrator | 2026-01-01 02:14:39 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:42.814772 | orchestrator | 2026-01-01 02:14:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:42.818721 | orchestrator | 2026-01-01 02:14:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:42.818838 | orchestrator | 2026-01-01 02:14:42 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:45.868897 | orchestrator | 2026-01-01 02:14:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:45.871242 | orchestrator | 2026-01-01 02:14:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:45.871585 | orchestrator | 2026-01-01 02:14:45 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:48.922667 | orchestrator | 2026-01-01 02:14:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:48.926606 | orchestrator | 2026-01-01 02:14:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:48.926858 | orchestrator | 2026-01-01 02:14:48 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:51.983792 | orchestrator | 2026-01-01 02:14:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:51.986712 | orchestrator | 2026-01-01 02:14:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:51.986788 | orchestrator | 2026-01-01 02:14:51 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:55.041623 | orchestrator | 2026-01-01 02:14:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:55.044655 | orchestrator | 2026-01-01 02:14:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:55.044721 | orchestrator | 2026-01-01 02:14:55 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:14:58.090240 | orchestrator | 2026-01-01 02:14:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:14:58.091141 | orchestrator | 2026-01-01 02:14:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:14:58.091633 | orchestrator | 2026-01-01 02:14:58 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:15:01.137154 | orchestrator | 2026-01-01 02:15:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:15:01.140042 | orchestrator | 2026-01-01 02:15:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:15:01.140111 | orchestrator | 2026-01-01 02:15:01 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:15:04.187518 | orchestrator | 2026-01-01 02:15:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:15:04.189443 | orchestrator | 2026-01-01 02:15:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:15:04.189523 | orchestrator | 2026-01-01 02:15:04 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:15:07.231946 | orchestrator | 2026-01-01 02:15:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:15:07.232525 | orchestrator | 2026-01-01 02:15:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:15:07.232574 | orchestrator | 2026-01-01 02:15:07 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:15:10.279593 | orchestrator | 2026-01-01 02:15:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:15:10.280284 | orchestrator | 2026-01-01 02:15:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:15:10.280574 | orchestrator | 2026-01-01 02:15:10 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:15:13.330520 | orchestrator | 2026-01-01 02:15:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:15:13.332400 | orchestrator | 2026-01-01 02:15:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:15:13.332423 | orchestrator | 2026-01-01 02:15:13 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:15:16.379849 | orchestrator | 2026-01-01 02:15:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:15:16.382874 | orchestrator | 2026-01-01 02:15:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:15:16.382931 | orchestrator | 2026-01-01 02:15:16 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:15:19.429631 | orchestrator | 2026-01-01 02:15:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:15:19.430842 | orchestrator | 2026-01-01 02:15:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:15:19.431056 | orchestrator | 2026-01-01 02:15:19 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:15:22.482890 | orchestrator | 2026-01-01 02:15:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:15:22.485824 | orchestrator | 2026-01-01 02:15:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:15:22.485992 | orchestrator | 2026-01-01 02:15:22 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:15:25.529866 | orchestrator | 2026-01-01 02:15:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:15:25.531785 | orchestrator | 2026-01-01 02:15:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:15:25.532148 | orchestrator | 2026-01-01 02:15:25 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:15:28.586759 | orchestrator | 2026-01-01 02:15:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:15:28.588553 | orchestrator | 2026-01-01 02:15:28 | INFO
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:15:28.588603 | orchestrator | 2026-01-01 02:15:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:31.634916 | orchestrator | 2026-01-01 02:15:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:15:31.636643 | orchestrator | 2026-01-01 02:15:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:15:31.636706 | orchestrator | 2026-01-01 02:15:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:34.684826 | orchestrator | 2026-01-01 02:15:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:15:34.687843 | orchestrator | 2026-01-01 02:15:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:15:34.687939 | orchestrator | 2026-01-01 02:15:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:37.738187 | orchestrator | 2026-01-01 02:15:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:15:37.740186 | orchestrator | 2026-01-01 02:15:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:15:37.740414 | orchestrator | 2026-01-01 02:15:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:40.787069 | orchestrator | 2026-01-01 02:15:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:15:40.789068 | orchestrator | 2026-01-01 02:15:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:15:40.789097 | orchestrator | 2026-01-01 02:15:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:43.847047 | orchestrator | 2026-01-01 02:15:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:15:43.849959 | orchestrator | 2026-01-01 02:15:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:15:43.850134 | orchestrator | 2026-01-01 02:15:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:46.901624 | orchestrator | 2026-01-01 02:15:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:15:46.902863 | orchestrator | 2026-01-01 02:15:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:15:46.902977 | orchestrator | 2026-01-01 02:15:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:49.950104 | orchestrator | 2026-01-01 02:15:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:15:49.952173 | orchestrator | 2026-01-01 02:15:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:15:49.952225 | orchestrator | 2026-01-01 02:15:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:53.000517 | orchestrator | 2026-01-01 02:15:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:15:53.002605 | orchestrator | 2026-01-01 02:15:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:15:53.002663 | orchestrator | 2026-01-01 02:15:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:56.050675 | orchestrator | 2026-01-01 02:15:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:15:56.053658 | orchestrator | 2026-01-01 02:15:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:15:56.053726 | orchestrator | 2026-01-01 02:15:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:59.103447 | orchestrator | 2026-01-01 02:15:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:15:59.104767 | orchestrator | 2026-01-01 02:15:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:15:59.104852 | orchestrator | 2026-01-01 02:15:59 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:16:02.153711 | orchestrator | 2026-01-01 02:16:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:02.154416 | orchestrator | 2026-01-01 02:16:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:02.154566 | orchestrator | 2026-01-01 02:16:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:05.199591 | orchestrator | 2026-01-01 02:16:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:05.201228 | orchestrator | 2026-01-01 02:16:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:05.201270 | orchestrator | 2026-01-01 02:16:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:08.250458 | orchestrator | 2026-01-01 02:16:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:08.251764 | orchestrator | 2026-01-01 02:16:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:08.251823 | orchestrator | 2026-01-01 02:16:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:11.297217 | orchestrator | 2026-01-01 02:16:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:11.299673 | orchestrator | 2026-01-01 02:16:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:11.299774 | orchestrator | 2026-01-01 02:16:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:14.345422 | orchestrator | 2026-01-01 02:16:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:14.347644 | orchestrator | 2026-01-01 02:16:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:14.347729 | orchestrator | 2026-01-01 02:16:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:17.405815 | orchestrator | 2026-01-01 
02:16:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:17.409063 | orchestrator | 2026-01-01 02:16:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:17.409498 | orchestrator | 2026-01-01 02:16:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:20.462674 | orchestrator | 2026-01-01 02:16:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:20.463931 | orchestrator | 2026-01-01 02:16:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:20.464053 | orchestrator | 2026-01-01 02:16:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:23.510602 | orchestrator | 2026-01-01 02:16:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:23.511159 | orchestrator | 2026-01-01 02:16:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:23.511248 | orchestrator | 2026-01-01 02:16:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:26.555098 | orchestrator | 2026-01-01 02:16:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:26.555436 | orchestrator | 2026-01-01 02:16:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:26.555467 | orchestrator | 2026-01-01 02:16:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:29.608543 | orchestrator | 2026-01-01 02:16:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:29.610278 | orchestrator | 2026-01-01 02:16:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:29.610490 | orchestrator | 2026-01-01 02:16:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:32.657594 | orchestrator | 2026-01-01 02:16:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:16:32.659774 | orchestrator | 2026-01-01 02:16:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:32.659830 | orchestrator | 2026-01-01 02:16:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:35.706731 | orchestrator | 2026-01-01 02:16:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:35.709109 | orchestrator | 2026-01-01 02:16:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:35.709327 | orchestrator | 2026-01-01 02:16:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:38.746999 | orchestrator | 2026-01-01 02:16:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:38.747801 | orchestrator | 2026-01-01 02:16:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:38.747947 | orchestrator | 2026-01-01 02:16:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:41.786769 | orchestrator | 2026-01-01 02:16:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:41.788298 | orchestrator | 2026-01-01 02:16:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:41.788313 | orchestrator | 2026-01-01 02:16:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:44.832192 | orchestrator | 2026-01-01 02:16:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:44.832717 | orchestrator | 2026-01-01 02:16:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:44.832756 | orchestrator | 2026-01-01 02:16:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:47.879104 | orchestrator | 2026-01-01 02:16:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:47.881253 | orchestrator | 2026-01-01 02:16:47 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:47.881356 | orchestrator | 2026-01-01 02:16:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:50.927102 | orchestrator | 2026-01-01 02:16:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:50.929237 | orchestrator | 2026-01-01 02:16:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:50.929355 | orchestrator | 2026-01-01 02:16:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:53.984400 | orchestrator | 2026-01-01 02:16:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:53.986566 | orchestrator | 2026-01-01 02:16:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:53.986605 | orchestrator | 2026-01-01 02:16:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:57.047762 | orchestrator | 2026-01-01 02:16:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:16:57.049638 | orchestrator | 2026-01-01 02:16:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:16:57.049699 | orchestrator | 2026-01-01 02:16:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:00.096633 | orchestrator | 2026-01-01 02:17:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:00.099939 | orchestrator | 2026-01-01 02:17:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:00.100041 | orchestrator | 2026-01-01 02:17:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:03.141200 | orchestrator | 2026-01-01 02:17:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:03.142972 | orchestrator | 2026-01-01 02:17:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:17:03.143006 | orchestrator | 2026-01-01 02:17:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:06.196773 | orchestrator | 2026-01-01 02:17:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:06.197802 | orchestrator | 2026-01-01 02:17:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:06.197868 | orchestrator | 2026-01-01 02:17:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:09.251563 | orchestrator | 2026-01-01 02:17:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:09.253650 | orchestrator | 2026-01-01 02:17:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:09.253707 | orchestrator | 2026-01-01 02:17:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:12.300521 | orchestrator | 2026-01-01 02:17:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:12.301421 | orchestrator | 2026-01-01 02:17:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:12.301458 | orchestrator | 2026-01-01 02:17:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:15.341396 | orchestrator | 2026-01-01 02:17:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:15.343151 | orchestrator | 2026-01-01 02:17:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:15.343209 | orchestrator | 2026-01-01 02:17:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:18.393658 | orchestrator | 2026-01-01 02:17:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:18.395643 | orchestrator | 2026-01-01 02:17:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:18.395715 | orchestrator | 2026-01-01 02:17:18 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:17:21.443777 | orchestrator | 2026-01-01 02:17:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:21.445717 | orchestrator | 2026-01-01 02:17:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:21.445764 | orchestrator | 2026-01-01 02:17:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:24.500620 | orchestrator | 2026-01-01 02:17:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:24.502729 | orchestrator | 2026-01-01 02:17:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:24.502798 | orchestrator | 2026-01-01 02:17:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:27.548834 | orchestrator | 2026-01-01 02:17:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:27.550980 | orchestrator | 2026-01-01 02:17:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:27.551021 | orchestrator | 2026-01-01 02:17:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:30.594796 | orchestrator | 2026-01-01 02:17:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:30.596185 | orchestrator | 2026-01-01 02:17:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:30.596243 | orchestrator | 2026-01-01 02:17:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:33.641827 | orchestrator | 2026-01-01 02:17:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:33.644198 | orchestrator | 2026-01-01 02:17:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:33.644268 | orchestrator | 2026-01-01 02:17:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:36.687158 | orchestrator | 2026-01-01 
02:17:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:36.689788 | orchestrator | 2026-01-01 02:17:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:36.689882 | orchestrator | 2026-01-01 02:17:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:39.741826 | orchestrator | 2026-01-01 02:17:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:39.743837 | orchestrator | 2026-01-01 02:17:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:39.743859 | orchestrator | 2026-01-01 02:17:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:42.795511 | orchestrator | 2026-01-01 02:17:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:42.797768 | orchestrator | 2026-01-01 02:17:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:42.797846 | orchestrator | 2026-01-01 02:17:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:45.841883 | orchestrator | 2026-01-01 02:17:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:45.843784 | orchestrator | 2026-01-01 02:17:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:45.843892 | orchestrator | 2026-01-01 02:17:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:48.896971 | orchestrator | 2026-01-01 02:17:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:48.898805 | orchestrator | 2026-01-01 02:17:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:48.898861 | orchestrator | 2026-01-01 02:17:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:51.953493 | orchestrator | 2026-01-01 02:17:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:17:51.955169 | orchestrator | 2026-01-01 02:17:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:51.955203 | orchestrator | 2026-01-01 02:17:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:55.002658 | orchestrator | 2026-01-01 02:17:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:55.005389 | orchestrator | 2026-01-01 02:17:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:55.006011 | orchestrator | 2026-01-01 02:17:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:17:58.051865 | orchestrator | 2026-01-01 02:17:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:17:58.053027 | orchestrator | 2026-01-01 02:17:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:17:58.053478 | orchestrator | 2026-01-01 02:17:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:01.088915 | orchestrator | 2026-01-01 02:18:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:01.092028 | orchestrator | 2026-01-01 02:18:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:01.093367 | orchestrator | 2026-01-01 02:18:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:04.138057 | orchestrator | 2026-01-01 02:18:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:04.140653 | orchestrator | 2026-01-01 02:18:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:04.140708 | orchestrator | 2026-01-01 02:18:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:07.187831 | orchestrator | 2026-01-01 02:18:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:07.189103 | orchestrator | 2026-01-01 02:18:07 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:07.189209 | orchestrator | 2026-01-01 02:18:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:10.238252 | orchestrator | 2026-01-01 02:18:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:10.240835 | orchestrator | 2026-01-01 02:18:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:10.240903 | orchestrator | 2026-01-01 02:18:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:13.289824 | orchestrator | 2026-01-01 02:18:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:13.291201 | orchestrator | 2026-01-01 02:18:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:13.291249 | orchestrator | 2026-01-01 02:18:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:16.339760 | orchestrator | 2026-01-01 02:18:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:16.342579 | orchestrator | 2026-01-01 02:18:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:16.342628 | orchestrator | 2026-01-01 02:18:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:19.388909 | orchestrator | 2026-01-01 02:18:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:19.390633 | orchestrator | 2026-01-01 02:18:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:19.390695 | orchestrator | 2026-01-01 02:18:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:22.449189 | orchestrator | 2026-01-01 02:18:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:22.450567 | orchestrator | 2026-01-01 02:18:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:18:22.450605 | orchestrator | 2026-01-01 02:18:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:25.498312 | orchestrator | 2026-01-01 02:18:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:25.498962 | orchestrator | 2026-01-01 02:18:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:25.499176 | orchestrator | 2026-01-01 02:18:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:28.544819 | orchestrator | 2026-01-01 02:18:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:28.546251 | orchestrator | 2026-01-01 02:18:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:28.546289 | orchestrator | 2026-01-01 02:18:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:31.587060 | orchestrator | 2026-01-01 02:18:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:31.588343 | orchestrator | 2026-01-01 02:18:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:31.588552 | orchestrator | 2026-01-01 02:18:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:34.632922 | orchestrator | 2026-01-01 02:18:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:34.635544 | orchestrator | 2026-01-01 02:18:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:34.635629 | orchestrator | 2026-01-01 02:18:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:37.681202 | orchestrator | 2026-01-01 02:18:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:37.682637 | orchestrator | 2026-01-01 02:18:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:37.682804 | orchestrator | 2026-01-01 02:18:37 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:18:40.729056 | orchestrator | 2026-01-01 02:18:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:40.730189 | orchestrator | 2026-01-01 02:18:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:40.730240 | orchestrator | 2026-01-01 02:18:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:43.781433 | orchestrator | 2026-01-01 02:18:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:43.784506 | orchestrator | 2026-01-01 02:18:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:43.784584 | orchestrator | 2026-01-01 02:18:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:46.832924 | orchestrator | 2026-01-01 02:18:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:46.834337 | orchestrator | 2026-01-01 02:18:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:46.834441 | orchestrator | 2026-01-01 02:18:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:49.882968 | orchestrator | 2026-01-01 02:18:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:49.886771 | orchestrator | 2026-01-01 02:18:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:49.886839 | orchestrator | 2026-01-01 02:18:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:52.932953 | orchestrator | 2026-01-01 02:18:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:52.934280 | orchestrator | 2026-01-01 02:18:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:52.934330 | orchestrator | 2026-01-01 02:18:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:55.979945 | orchestrator | 2026-01-01 
02:18:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:55.981710 | orchestrator | 2026-01-01 02:18:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:55.981734 | orchestrator | 2026-01-01 02:18:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:18:59.031309 | orchestrator | 2026-01-01 02:18:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:18:59.033500 | orchestrator | 2026-01-01 02:18:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:18:59.033569 | orchestrator | 2026-01-01 02:18:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:19:02.072883 | orchestrator | 2026-01-01 02:19:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:19:02.074272 | orchestrator | 2026-01-01 02:19:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:19:02.074313 | orchestrator | 2026-01-01 02:19:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:19:05.127443 | orchestrator | 2026-01-01 02:19:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:19:05.129525 | orchestrator | 2026-01-01 02:19:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:19:05.129614 | orchestrator | 2026-01-01 02:19:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:19:08.188307 | orchestrator | 2026-01-01 02:19:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:19:08.192632 | orchestrator | 2026-01-01 02:19:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:19:08.192818 | orchestrator | 2026-01-01 02:19:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:19:11.236306 | orchestrator | 2026-01-01 02:19:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:19:11.237643 | orchestrator | 2026-01-01 02:19:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:19:11.237675 | orchestrator | 2026-01-01 02:19:11 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 02:19:14 to 02:24:40: tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858 remain in state STARTED ...]
2026-01-01 02:24:43.925450 | orchestrator | 2026-01-01 02:24:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:24:43.927945 | orchestrator | 2026-01-01 02:24:43 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:24:43.928073 | orchestrator | 2026-01-01 02:24:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:46.974334 | orchestrator | 2026-01-01 02:24:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:24:46.978213 | orchestrator | 2026-01-01 02:24:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:24:46.978291 | orchestrator | 2026-01-01 02:24:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:50.027265 | orchestrator | 2026-01-01 02:24:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:24:50.030266 | orchestrator | 2026-01-01 02:24:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:24:50.030332 | orchestrator | 2026-01-01 02:24:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:53.070830 | orchestrator | 2026-01-01 02:24:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:24:53.073848 | orchestrator | 2026-01-01 02:24:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:24:53.073881 | orchestrator | 2026-01-01 02:24:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:56.121092 | orchestrator | 2026-01-01 02:24:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:24:56.121849 | orchestrator | 2026-01-01 02:24:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:24:56.121880 | orchestrator | 2026-01-01 02:24:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:59.174059 | orchestrator | 2026-01-01 02:24:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:24:59.176306 | orchestrator | 2026-01-01 02:24:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:24:59.176391 | orchestrator | 2026-01-01 02:24:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:02.224323 | orchestrator | 2026-01-01 02:25:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:02.228331 | orchestrator | 2026-01-01 02:25:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:02.228391 | orchestrator | 2026-01-01 02:25:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:05.276925 | orchestrator | 2026-01-01 02:25:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:05.279832 | orchestrator | 2026-01-01 02:25:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:05.279916 | orchestrator | 2026-01-01 02:25:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:08.334194 | orchestrator | 2026-01-01 02:25:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:08.334541 | orchestrator | 2026-01-01 02:25:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:08.334572 | orchestrator | 2026-01-01 02:25:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:11.378711 | orchestrator | 2026-01-01 02:25:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:11.381526 | orchestrator | 2026-01-01 02:25:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:11.381579 | orchestrator | 2026-01-01 02:25:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:14.432662 | orchestrator | 2026-01-01 02:25:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:14.433980 | orchestrator | 2026-01-01 02:25:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:14.434067 | orchestrator | 2026-01-01 02:25:14 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:25:17.492815 | orchestrator | 2026-01-01 02:25:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:17.494805 | orchestrator | 2026-01-01 02:25:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:17.494898 | orchestrator | 2026-01-01 02:25:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:20.541484 | orchestrator | 2026-01-01 02:25:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:20.544727 | orchestrator | 2026-01-01 02:25:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:20.544792 | orchestrator | 2026-01-01 02:25:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:23.586534 | orchestrator | 2026-01-01 02:25:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:23.587397 | orchestrator | 2026-01-01 02:25:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:23.587617 | orchestrator | 2026-01-01 02:25:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:26.622606 | orchestrator | 2026-01-01 02:25:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:26.624163 | orchestrator | 2026-01-01 02:25:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:26.624206 | orchestrator | 2026-01-01 02:25:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:29.670411 | orchestrator | 2026-01-01 02:25:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:29.673203 | orchestrator | 2026-01-01 02:25:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:29.673259 | orchestrator | 2026-01-01 02:25:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:32.711386 | orchestrator | 2026-01-01 
02:25:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:32.713395 | orchestrator | 2026-01-01 02:25:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:32.713536 | orchestrator | 2026-01-01 02:25:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:35.754785 | orchestrator | 2026-01-01 02:25:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:35.756242 | orchestrator | 2026-01-01 02:25:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:35.756290 | orchestrator | 2026-01-01 02:25:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:38.798727 | orchestrator | 2026-01-01 02:25:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:38.800982 | orchestrator | 2026-01-01 02:25:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:38.801055 | orchestrator | 2026-01-01 02:25:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:41.844137 | orchestrator | 2026-01-01 02:25:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:41.847372 | orchestrator | 2026-01-01 02:25:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:41.847567 | orchestrator | 2026-01-01 02:25:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:44.895624 | orchestrator | 2026-01-01 02:25:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:44.896225 | orchestrator | 2026-01-01 02:25:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:44.896261 | orchestrator | 2026-01-01 02:25:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:47.938506 | orchestrator | 2026-01-01 02:25:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:25:47.941887 | orchestrator | 2026-01-01 02:25:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:47.941970 | orchestrator | 2026-01-01 02:25:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:50.992864 | orchestrator | 2026-01-01 02:25:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:50.995888 | orchestrator | 2026-01-01 02:25:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:50.995928 | orchestrator | 2026-01-01 02:25:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:54.047504 | orchestrator | 2026-01-01 02:25:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:54.048954 | orchestrator | 2026-01-01 02:25:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:54.049002 | orchestrator | 2026-01-01 02:25:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:57.090665 | orchestrator | 2026-01-01 02:25:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:25:57.091583 | orchestrator | 2026-01-01 02:25:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:25:57.091625 | orchestrator | 2026-01-01 02:25:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:00.148818 | orchestrator | 2026-01-01 02:26:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:00.149902 | orchestrator | 2026-01-01 02:26:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:00.150098 | orchestrator | 2026-01-01 02:26:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:03.190292 | orchestrator | 2026-01-01 02:26:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:03.190937 | orchestrator | 2026-01-01 02:26:03 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:03.190968 | orchestrator | 2026-01-01 02:26:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:06.239296 | orchestrator | 2026-01-01 02:26:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:06.240944 | orchestrator | 2026-01-01 02:26:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:06.240999 | orchestrator | 2026-01-01 02:26:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:09.285673 | orchestrator | 2026-01-01 02:26:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:09.286771 | orchestrator | 2026-01-01 02:26:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:09.286836 | orchestrator | 2026-01-01 02:26:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:12.322150 | orchestrator | 2026-01-01 02:26:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:12.322720 | orchestrator | 2026-01-01 02:26:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:12.322739 | orchestrator | 2026-01-01 02:26:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:15.364942 | orchestrator | 2026-01-01 02:26:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:15.366160 | orchestrator | 2026-01-01 02:26:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:15.366620 | orchestrator | 2026-01-01 02:26:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:18.410737 | orchestrator | 2026-01-01 02:26:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:18.414347 | orchestrator | 2026-01-01 02:26:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:26:18.415286 | orchestrator | 2026-01-01 02:26:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:21.471663 | orchestrator | 2026-01-01 02:26:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:21.472890 | orchestrator | 2026-01-01 02:26:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:21.473203 | orchestrator | 2026-01-01 02:26:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:24.506741 | orchestrator | 2026-01-01 02:26:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:24.507279 | orchestrator | 2026-01-01 02:26:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:24.507308 | orchestrator | 2026-01-01 02:26:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:27.550549 | orchestrator | 2026-01-01 02:26:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:27.552401 | orchestrator | 2026-01-01 02:26:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:27.552493 | orchestrator | 2026-01-01 02:26:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:30.599476 | orchestrator | 2026-01-01 02:26:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:30.601935 | orchestrator | 2026-01-01 02:26:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:30.602147 | orchestrator | 2026-01-01 02:26:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:33.655541 | orchestrator | 2026-01-01 02:26:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:33.657072 | orchestrator | 2026-01-01 02:26:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:33.657221 | orchestrator | 2026-01-01 02:26:33 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:26:36.710497 | orchestrator | 2026-01-01 02:26:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:36.713314 | orchestrator | 2026-01-01 02:26:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:36.713334 | orchestrator | 2026-01-01 02:26:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:39.766746 | orchestrator | 2026-01-01 02:26:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:39.768124 | orchestrator | 2026-01-01 02:26:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:39.768161 | orchestrator | 2026-01-01 02:26:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:42.815803 | orchestrator | 2026-01-01 02:26:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:42.818615 | orchestrator | 2026-01-01 02:26:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:42.818675 | orchestrator | 2026-01-01 02:26:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:45.870988 | orchestrator | 2026-01-01 02:26:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:45.873269 | orchestrator | 2026-01-01 02:26:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:45.873553 | orchestrator | 2026-01-01 02:26:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:48.929721 | orchestrator | 2026-01-01 02:26:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:48.931278 | orchestrator | 2026-01-01 02:26:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:48.931313 | orchestrator | 2026-01-01 02:26:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:51.979352 | orchestrator | 2026-01-01 
02:26:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:51.980990 | orchestrator | 2026-01-01 02:26:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:51.981186 | orchestrator | 2026-01-01 02:26:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:55.029478 | orchestrator | 2026-01-01 02:26:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:55.030848 | orchestrator | 2026-01-01 02:26:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:55.030945 | orchestrator | 2026-01-01 02:26:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:26:58.078938 | orchestrator | 2026-01-01 02:26:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:26:58.082365 | orchestrator | 2026-01-01 02:26:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:26:58.082466 | orchestrator | 2026-01-01 02:26:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:01.136050 | orchestrator | 2026-01-01 02:27:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:01.139143 | orchestrator | 2026-01-01 02:27:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:01.139427 | orchestrator | 2026-01-01 02:27:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:04.185526 | orchestrator | 2026-01-01 02:27:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:04.187978 | orchestrator | 2026-01-01 02:27:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:04.188066 | orchestrator | 2026-01-01 02:27:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:07.239850 | orchestrator | 2026-01-01 02:27:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:27:07.241826 | orchestrator | 2026-01-01 02:27:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:07.241878 | orchestrator | 2026-01-01 02:27:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:10.297134 | orchestrator | 2026-01-01 02:27:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:10.298607 | orchestrator | 2026-01-01 02:27:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:10.298748 | orchestrator | 2026-01-01 02:27:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:13.344336 | orchestrator | 2026-01-01 02:27:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:13.346889 | orchestrator | 2026-01-01 02:27:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:13.346933 | orchestrator | 2026-01-01 02:27:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:16.390731 | orchestrator | 2026-01-01 02:27:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:16.392267 | orchestrator | 2026-01-01 02:27:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:16.392325 | orchestrator | 2026-01-01 02:27:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:19.434867 | orchestrator | 2026-01-01 02:27:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:19.436852 | orchestrator | 2026-01-01 02:27:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:19.437205 | orchestrator | 2026-01-01 02:27:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:22.491863 | orchestrator | 2026-01-01 02:27:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:22.493146 | orchestrator | 2026-01-01 02:27:22 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:22.493247 | orchestrator | 2026-01-01 02:27:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:25.540989 | orchestrator | 2026-01-01 02:27:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:25.541779 | orchestrator | 2026-01-01 02:27:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:25.541815 | orchestrator | 2026-01-01 02:27:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:28.596955 | orchestrator | 2026-01-01 02:27:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:28.597915 | orchestrator | 2026-01-01 02:27:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:28.598000 | orchestrator | 2026-01-01 02:27:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:31.652075 | orchestrator | 2026-01-01 02:27:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:31.654559 | orchestrator | 2026-01-01 02:27:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:31.654729 | orchestrator | 2026-01-01 02:27:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:34.704677 | orchestrator | 2026-01-01 02:27:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:34.707784 | orchestrator | 2026-01-01 02:27:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:34.707876 | orchestrator | 2026-01-01 02:27:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:37.761093 | orchestrator | 2026-01-01 02:27:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:37.762442 | orchestrator | 2026-01-01 02:27:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:27:37.762508 | orchestrator | 2026-01-01 02:27:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:40.811602 | orchestrator | 2026-01-01 02:27:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:40.814339 | orchestrator | 2026-01-01 02:27:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:40.814442 | orchestrator | 2026-01-01 02:27:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:43.867709 | orchestrator | 2026-01-01 02:27:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:43.869776 | orchestrator | 2026-01-01 02:27:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:43.869810 | orchestrator | 2026-01-01 02:27:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:46.927918 | orchestrator | 2026-01-01 02:27:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:46.932268 | orchestrator | 2026-01-01 02:27:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:46.932322 | orchestrator | 2026-01-01 02:27:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:49.982562 | orchestrator | 2026-01-01 02:27:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:49.985052 | orchestrator | 2026-01-01 02:27:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:49.985277 | orchestrator | 2026-01-01 02:27:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:53.039729 | orchestrator | 2026-01-01 02:27:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:53.041408 | orchestrator | 2026-01-01 02:27:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:53.041565 | orchestrator | 2026-01-01 02:27:53 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:27:56.088646 | orchestrator | 2026-01-01 02:27:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:56.090227 | orchestrator | 2026-01-01 02:27:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:56.090259 | orchestrator | 2026-01-01 02:27:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:27:59.148513 | orchestrator | 2026-01-01 02:27:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:27:59.150051 | orchestrator | 2026-01-01 02:27:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:27:59.150084 | orchestrator | 2026-01-01 02:27:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:28:02.196428 | orchestrator | 2026-01-01 02:28:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:28:02.198262 | orchestrator | 2026-01-01 02:28:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:28:02.198299 | orchestrator | 2026-01-01 02:28:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:28:05.251562 | orchestrator | 2026-01-01 02:28:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:28:05.254207 | orchestrator | 2026-01-01 02:28:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:28:05.254328 | orchestrator | 2026-01-01 02:28:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:28:08.301855 | orchestrator | 2026-01-01 02:28:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:28:08.304104 | orchestrator | 2026-01-01 02:28:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:28:08.304672 | orchestrator | 2026-01-01 02:28:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:28:11.359202 | orchestrator | 2026-01-01 
02:28:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:28:11.361226 | orchestrator | 2026-01-01 02:28:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:28:11.361411 | orchestrator | 2026-01-01 02:28:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:28:14.420326 | orchestrator | 2026-01-01 02:28:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:28:14.421936 | orchestrator | 2026-01-01 02:28:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:28:14.421965 | orchestrator | 2026-01-01 02:28:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:28:17.468087 | orchestrator | 2026-01-01 02:28:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:28:17.469294 | orchestrator | 2026-01-01 02:28:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:28:17.469308 | orchestrator | 2026-01-01 02:28:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:28:20.522065 | orchestrator | 2026-01-01 02:28:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:28:20.523524 | orchestrator | 2026-01-01 02:28:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:28:20.523539 | orchestrator | 2026-01-01 02:28:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:28:23.569219 | orchestrator | 2026-01-01 02:28:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:28:23.572650 | orchestrator | 2026-01-01 02:28:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:28:23.572724 | orchestrator | 2026-01-01 02:28:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:28:26.620445 | orchestrator | 2026-01-01 02:28:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:28:26.621738 | orchestrator | 2026-01-01 02:28:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:28:26.621929 | orchestrator | 2026-01-01 02:28:26 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 s from 02:28:29 to 02:33:41: tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858 remained in state STARTED ...]
2026-01-01 02:33:44.077201 | orchestrator | 2026-01-01 02:33:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state
STARTED 2026-01-01 02:33:44.078974 | orchestrator | 2026-01-01 02:33:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:33:44.079319 | orchestrator | 2026-01-01 02:33:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:47.131482 | orchestrator | 2026-01-01 02:33:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:33:47.134635 | orchestrator | 2026-01-01 02:33:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:33:47.134706 | orchestrator | 2026-01-01 02:33:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:50.178825 | orchestrator | 2026-01-01 02:33:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:33:50.179778 | orchestrator | 2026-01-01 02:33:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:33:50.179825 | orchestrator | 2026-01-01 02:33:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:53.233587 | orchestrator | 2026-01-01 02:33:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:33:53.235618 | orchestrator | 2026-01-01 02:33:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:33:53.235696 | orchestrator | 2026-01-01 02:33:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:56.279026 | orchestrator | 2026-01-01 02:33:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:33:56.280348 | orchestrator | 2026-01-01 02:33:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:33:56.280388 | orchestrator | 2026-01-01 02:33:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:59.331636 | orchestrator | 2026-01-01 02:33:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:33:59.332806 | orchestrator | 2026-01-01 02:33:59 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:33:59.332846 | orchestrator | 2026-01-01 02:33:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:02.377492 | orchestrator | 2026-01-01 02:34:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:02.380203 | orchestrator | 2026-01-01 02:34:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:02.380276 | orchestrator | 2026-01-01 02:34:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:05.423474 | orchestrator | 2026-01-01 02:34:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:05.425071 | orchestrator | 2026-01-01 02:34:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:05.425126 | orchestrator | 2026-01-01 02:34:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:08.475734 | orchestrator | 2026-01-01 02:34:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:08.477223 | orchestrator | 2026-01-01 02:34:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:08.477305 | orchestrator | 2026-01-01 02:34:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:11.517608 | orchestrator | 2026-01-01 02:34:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:11.518386 | orchestrator | 2026-01-01 02:34:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:11.518491 | orchestrator | 2026-01-01 02:34:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:14.560826 | orchestrator | 2026-01-01 02:34:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:14.562442 | orchestrator | 2026-01-01 02:34:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:34:14.562560 | orchestrator | 2026-01-01 02:34:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:17.599923 | orchestrator | 2026-01-01 02:34:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:17.600991 | orchestrator | 2026-01-01 02:34:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:17.601411 | orchestrator | 2026-01-01 02:34:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:20.651067 | orchestrator | 2026-01-01 02:34:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:20.652627 | orchestrator | 2026-01-01 02:34:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:20.652682 | orchestrator | 2026-01-01 02:34:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:23.699943 | orchestrator | 2026-01-01 02:34:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:23.700551 | orchestrator | 2026-01-01 02:34:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:23.700594 | orchestrator | 2026-01-01 02:34:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:26.743754 | orchestrator | 2026-01-01 02:34:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:26.745518 | orchestrator | 2026-01-01 02:34:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:26.745558 | orchestrator | 2026-01-01 02:34:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:29.790073 | orchestrator | 2026-01-01 02:34:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:29.792455 | orchestrator | 2026-01-01 02:34:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:29.792570 | orchestrator | 2026-01-01 02:34:29 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:34:32.830962 | orchestrator | 2026-01-01 02:34:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:32.833047 | orchestrator | 2026-01-01 02:34:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:32.833123 | orchestrator | 2026-01-01 02:34:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:35.876997 | orchestrator | 2026-01-01 02:34:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:35.879023 | orchestrator | 2026-01-01 02:34:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:35.879139 | orchestrator | 2026-01-01 02:34:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:38.919754 | orchestrator | 2026-01-01 02:34:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:38.920875 | orchestrator | 2026-01-01 02:34:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:38.920927 | orchestrator | 2026-01-01 02:34:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:41.969881 | orchestrator | 2026-01-01 02:34:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:41.972214 | orchestrator | 2026-01-01 02:34:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:41.972345 | orchestrator | 2026-01-01 02:34:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:45.013681 | orchestrator | 2026-01-01 02:34:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:45.016132 | orchestrator | 2026-01-01 02:34:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:45.016198 | orchestrator | 2026-01-01 02:34:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:48.061194 | orchestrator | 2026-01-01 
02:34:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:48.064377 | orchestrator | 2026-01-01 02:34:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:48.064573 | orchestrator | 2026-01-01 02:34:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:51.106560 | orchestrator | 2026-01-01 02:34:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:51.107680 | orchestrator | 2026-01-01 02:34:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:51.107702 | orchestrator | 2026-01-01 02:34:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:54.148424 | orchestrator | 2026-01-01 02:34:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:54.149869 | orchestrator | 2026-01-01 02:34:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:54.149944 | orchestrator | 2026-01-01 02:34:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:57.194608 | orchestrator | 2026-01-01 02:34:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:34:57.195405 | orchestrator | 2026-01-01 02:34:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:34:57.195438 | orchestrator | 2026-01-01 02:34:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:00.245901 | orchestrator | 2026-01-01 02:35:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:00.246967 | orchestrator | 2026-01-01 02:35:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:00.247002 | orchestrator | 2026-01-01 02:35:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:03.286276 | orchestrator | 2026-01-01 02:35:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:35:03.288132 | orchestrator | 2026-01-01 02:35:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:03.288243 | orchestrator | 2026-01-01 02:35:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:06.331714 | orchestrator | 2026-01-01 02:35:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:06.336392 | orchestrator | 2026-01-01 02:35:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:06.337191 | orchestrator | 2026-01-01 02:35:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:09.389604 | orchestrator | 2026-01-01 02:35:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:09.390961 | orchestrator | 2026-01-01 02:35:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:09.391009 | orchestrator | 2026-01-01 02:35:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:12.461085 | orchestrator | 2026-01-01 02:35:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:12.464160 | orchestrator | 2026-01-01 02:35:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:12.464520 | orchestrator | 2026-01-01 02:35:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:15.511983 | orchestrator | 2026-01-01 02:35:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:15.514148 | orchestrator | 2026-01-01 02:35:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:15.514293 | orchestrator | 2026-01-01 02:35:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:18.561493 | orchestrator | 2026-01-01 02:35:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:18.564358 | orchestrator | 2026-01-01 02:35:18 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:18.564402 | orchestrator | 2026-01-01 02:35:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:21.609431 | orchestrator | 2026-01-01 02:35:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:21.612946 | orchestrator | 2026-01-01 02:35:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:21.613068 | orchestrator | 2026-01-01 02:35:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:24.664454 | orchestrator | 2026-01-01 02:35:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:24.668051 | orchestrator | 2026-01-01 02:35:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:24.668122 | orchestrator | 2026-01-01 02:35:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:27.711059 | orchestrator | 2026-01-01 02:35:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:27.713267 | orchestrator | 2026-01-01 02:35:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:27.713329 | orchestrator | 2026-01-01 02:35:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:30.764461 | orchestrator | 2026-01-01 02:35:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:30.767036 | orchestrator | 2026-01-01 02:35:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:30.767245 | orchestrator | 2026-01-01 02:35:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:33.816834 | orchestrator | 2026-01-01 02:35:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:33.819380 | orchestrator | 2026-01-01 02:35:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:35:33.819436 | orchestrator | 2026-01-01 02:35:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:36.872259 | orchestrator | 2026-01-01 02:35:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:36.874626 | orchestrator | 2026-01-01 02:35:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:36.874730 | orchestrator | 2026-01-01 02:35:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:39.928623 | orchestrator | 2026-01-01 02:35:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:39.930715 | orchestrator | 2026-01-01 02:35:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:39.930925 | orchestrator | 2026-01-01 02:35:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:42.982925 | orchestrator | 2026-01-01 02:35:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:42.984071 | orchestrator | 2026-01-01 02:35:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:42.984111 | orchestrator | 2026-01-01 02:35:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:46.035060 | orchestrator | 2026-01-01 02:35:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:46.035846 | orchestrator | 2026-01-01 02:35:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:46.035868 | orchestrator | 2026-01-01 02:35:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:49.087148 | orchestrator | 2026-01-01 02:35:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:49.089310 | orchestrator | 2026-01-01 02:35:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:49.089349 | orchestrator | 2026-01-01 02:35:49 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:35:52.143975 | orchestrator | 2026-01-01 02:35:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:52.146555 | orchestrator | 2026-01-01 02:35:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:52.146625 | orchestrator | 2026-01-01 02:35:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:55.206271 | orchestrator | 2026-01-01 02:35:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:55.206576 | orchestrator | 2026-01-01 02:35:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:55.206596 | orchestrator | 2026-01-01 02:35:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:35:58.260732 | orchestrator | 2026-01-01 02:35:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:35:58.261468 | orchestrator | 2026-01-01 02:35:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:35:58.261506 | orchestrator | 2026-01-01 02:35:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:01.311603 | orchestrator | 2026-01-01 02:36:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:01.313664 | orchestrator | 2026-01-01 02:36:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:01.313691 | orchestrator | 2026-01-01 02:36:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:04.367051 | orchestrator | 2026-01-01 02:36:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:04.367776 | orchestrator | 2026-01-01 02:36:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:04.367823 | orchestrator | 2026-01-01 02:36:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:07.415142 | orchestrator | 2026-01-01 
02:36:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:07.417541 | orchestrator | 2026-01-01 02:36:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:07.417581 | orchestrator | 2026-01-01 02:36:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:10.468921 | orchestrator | 2026-01-01 02:36:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:10.470874 | orchestrator | 2026-01-01 02:36:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:10.471024 | orchestrator | 2026-01-01 02:36:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:13.526995 | orchestrator | 2026-01-01 02:36:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:13.528778 | orchestrator | 2026-01-01 02:36:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:13.528817 | orchestrator | 2026-01-01 02:36:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:16.579773 | orchestrator | 2026-01-01 02:36:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:16.581542 | orchestrator | 2026-01-01 02:36:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:16.581585 | orchestrator | 2026-01-01 02:36:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:19.629373 | orchestrator | 2026-01-01 02:36:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:19.630811 | orchestrator | 2026-01-01 02:36:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:19.630870 | orchestrator | 2026-01-01 02:36:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:22.673309 | orchestrator | 2026-01-01 02:36:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:36:22.674442 | orchestrator | 2026-01-01 02:36:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:22.674612 | orchestrator | 2026-01-01 02:36:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:25.722860 | orchestrator | 2026-01-01 02:36:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:25.725568 | orchestrator | 2026-01-01 02:36:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:25.725643 | orchestrator | 2026-01-01 02:36:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:28.781124 | orchestrator | 2026-01-01 02:36:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:28.782105 | orchestrator | 2026-01-01 02:36:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:28.782132 | orchestrator | 2026-01-01 02:36:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:31.826101 | orchestrator | 2026-01-01 02:36:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:31.827486 | orchestrator | 2026-01-01 02:36:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:31.827527 | orchestrator | 2026-01-01 02:36:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:34.882203 | orchestrator | 2026-01-01 02:36:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:34.884621 | orchestrator | 2026-01-01 02:36:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:34.884684 | orchestrator | 2026-01-01 02:36:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:37.937425 | orchestrator | 2026-01-01 02:36:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:37.938566 | orchestrator | 2026-01-01 02:36:37 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:37.938621 | orchestrator | 2026-01-01 02:36:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:40.980112 | orchestrator | 2026-01-01 02:36:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:40.981345 | orchestrator | 2026-01-01 02:36:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:40.981421 | orchestrator | 2026-01-01 02:36:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:44.029455 | orchestrator | 2026-01-01 02:36:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:44.029695 | orchestrator | 2026-01-01 02:36:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:44.029720 | orchestrator | 2026-01-01 02:36:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:47.074716 | orchestrator | 2026-01-01 02:36:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:47.076553 | orchestrator | 2026-01-01 02:36:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:47.076593 | orchestrator | 2026-01-01 02:36:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:50.123112 | orchestrator | 2026-01-01 02:36:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:50.124377 | orchestrator | 2026-01-01 02:36:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:50.124481 | orchestrator | 2026-01-01 02:36:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:53.175772 | orchestrator | 2026-01-01 02:36:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:53.178626 | orchestrator | 2026-01-01 02:36:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:36:53.178684 | orchestrator | 2026-01-01 02:36:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:56.227316 | orchestrator | 2026-01-01 02:36:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:56.228083 | orchestrator | 2026-01-01 02:36:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:56.228109 | orchestrator | 2026-01-01 02:36:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:36:59.282480 | orchestrator | 2026-01-01 02:36:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:36:59.285860 | orchestrator | 2026-01-01 02:36:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:36:59.285916 | orchestrator | 2026-01-01 02:36:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:37:02.341005 | orchestrator | 2026-01-01 02:37:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:37:02.342640 | orchestrator | 2026-01-01 02:37:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:37:02.342694 | orchestrator | 2026-01-01 02:37:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:37:05.387817 | orchestrator | 2026-01-01 02:37:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:37:05.389643 | orchestrator | 2026-01-01 02:37:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:37:05.389821 | orchestrator | 2026-01-01 02:37:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:37:08.438871 | orchestrator | 2026-01-01 02:37:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:37:08.441050 | orchestrator | 2026-01-01 02:37:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:37:08.441096 | orchestrator | 2026-01-01 02:37:08 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:37:11.508039 | orchestrator | 2026-01-01 02:37:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:37:11.509427 | orchestrator | 2026-01-01 02:37:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:37:11.509719 | orchestrator | 2026-01-01 02:37:11 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeated every ~3 seconds from 02:37:14 through 02:42:22; tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858 remain in state STARTED throughout]
2026-01-01 02:42:25.870562 | orchestrator | 2026-01-01 02:42:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:25.870784 | orchestrator | 2026-01-01 02:42:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:25.870809 | orchestrator | 2026-01-01 02:42:25 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:42:28.919950 | orchestrator | 2026-01-01 02:42:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:28.921488 | orchestrator | 2026-01-01 02:42:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:28.922314 | orchestrator | 2026-01-01 02:42:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:31.961718 | orchestrator | 2026-01-01 02:42:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:31.963132 | orchestrator | 2026-01-01 02:42:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:31.963462 | orchestrator | 2026-01-01 02:42:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:35.014440 | orchestrator | 2026-01-01 02:42:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:35.018401 | orchestrator | 2026-01-01 02:42:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:35.018448 | orchestrator | 2026-01-01 02:42:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:38.060394 | orchestrator | 2026-01-01 02:42:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:38.064120 | orchestrator | 2026-01-01 02:42:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:38.064239 | orchestrator | 2026-01-01 02:42:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:41.123400 | orchestrator | 2026-01-01 02:42:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:41.124212 | orchestrator | 2026-01-01 02:42:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:41.124262 | orchestrator | 2026-01-01 02:42:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:44.170127 | orchestrator | 2026-01-01 
02:42:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:44.172272 | orchestrator | 2026-01-01 02:42:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:44.172286 | orchestrator | 2026-01-01 02:42:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:47.220957 | orchestrator | 2026-01-01 02:42:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:47.222945 | orchestrator | 2026-01-01 02:42:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:47.223012 | orchestrator | 2026-01-01 02:42:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:50.268142 | orchestrator | 2026-01-01 02:42:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:50.271348 | orchestrator | 2026-01-01 02:42:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:50.271407 | orchestrator | 2026-01-01 02:42:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:53.312581 | orchestrator | 2026-01-01 02:42:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:53.314563 | orchestrator | 2026-01-01 02:42:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:53.314601 | orchestrator | 2026-01-01 02:42:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:56.357232 | orchestrator | 2026-01-01 02:42:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:42:56.361193 | orchestrator | 2026-01-01 02:42:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:56.361246 | orchestrator | 2026-01-01 02:42:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:59.415531 | orchestrator | 2026-01-01 02:42:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:42:59.418841 | orchestrator | 2026-01-01 02:42:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:42:59.418908 | orchestrator | 2026-01-01 02:42:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:02.464288 | orchestrator | 2026-01-01 02:43:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:02.465763 | orchestrator | 2026-01-01 02:43:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:02.465839 | orchestrator | 2026-01-01 02:43:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:05.514943 | orchestrator | 2026-01-01 02:43:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:05.516428 | orchestrator | 2026-01-01 02:43:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:05.516482 | orchestrator | 2026-01-01 02:43:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:08.562659 | orchestrator | 2026-01-01 02:43:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:08.564115 | orchestrator | 2026-01-01 02:43:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:08.564177 | orchestrator | 2026-01-01 02:43:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:11.604478 | orchestrator | 2026-01-01 02:43:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:11.606547 | orchestrator | 2026-01-01 02:43:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:11.606632 | orchestrator | 2026-01-01 02:43:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:14.653833 | orchestrator | 2026-01-01 02:43:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:14.654715 | orchestrator | 2026-01-01 02:43:14 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:14.654775 | orchestrator | 2026-01-01 02:43:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:17.697039 | orchestrator | 2026-01-01 02:43:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:17.699818 | orchestrator | 2026-01-01 02:43:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:17.699912 | orchestrator | 2026-01-01 02:43:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:20.742494 | orchestrator | 2026-01-01 02:43:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:20.742713 | orchestrator | 2026-01-01 02:43:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:20.742750 | orchestrator | 2026-01-01 02:43:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:23.786989 | orchestrator | 2026-01-01 02:43:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:23.787525 | orchestrator | 2026-01-01 02:43:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:23.787565 | orchestrator | 2026-01-01 02:43:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:26.836386 | orchestrator | 2026-01-01 02:43:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:26.837346 | orchestrator | 2026-01-01 02:43:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:26.837543 | orchestrator | 2026-01-01 02:43:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:29.887507 | orchestrator | 2026-01-01 02:43:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:29.889673 | orchestrator | 2026-01-01 02:43:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:43:29.889732 | orchestrator | 2026-01-01 02:43:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:32.942923 | orchestrator | 2026-01-01 02:43:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:32.946141 | orchestrator | 2026-01-01 02:43:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:32.946175 | orchestrator | 2026-01-01 02:43:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:35.994336 | orchestrator | 2026-01-01 02:43:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:35.996271 | orchestrator | 2026-01-01 02:43:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:35.996690 | orchestrator | 2026-01-01 02:43:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:39.041099 | orchestrator | 2026-01-01 02:43:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:39.044361 | orchestrator | 2026-01-01 02:43:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:39.044438 | orchestrator | 2026-01-01 02:43:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:42.090241 | orchestrator | 2026-01-01 02:43:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:42.090994 | orchestrator | 2026-01-01 02:43:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:42.091039 | orchestrator | 2026-01-01 02:43:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:45.150760 | orchestrator | 2026-01-01 02:43:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:45.153659 | orchestrator | 2026-01-01 02:43:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:45.153749 | orchestrator | 2026-01-01 02:43:45 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:43:48.192707 | orchestrator | 2026-01-01 02:43:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:48.195600 | orchestrator | 2026-01-01 02:43:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:48.195670 | orchestrator | 2026-01-01 02:43:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:51.238561 | orchestrator | 2026-01-01 02:43:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:51.239548 | orchestrator | 2026-01-01 02:43:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:51.239622 | orchestrator | 2026-01-01 02:43:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:54.291931 | orchestrator | 2026-01-01 02:43:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:54.292936 | orchestrator | 2026-01-01 02:43:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:54.292968 | orchestrator | 2026-01-01 02:43:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:57.341294 | orchestrator | 2026-01-01 02:43:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:43:57.342568 | orchestrator | 2026-01-01 02:43:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:43:57.342607 | orchestrator | 2026-01-01 02:43:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:00.395694 | orchestrator | 2026-01-01 02:44:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:00.397403 | orchestrator | 2026-01-01 02:44:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:00.397448 | orchestrator | 2026-01-01 02:44:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:03.447910 | orchestrator | 2026-01-01 
02:44:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:03.449129 | orchestrator | 2026-01-01 02:44:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:03.449185 | orchestrator | 2026-01-01 02:44:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:06.502824 | orchestrator | 2026-01-01 02:44:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:06.504457 | orchestrator | 2026-01-01 02:44:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:06.504625 | orchestrator | 2026-01-01 02:44:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:09.564525 | orchestrator | 2026-01-01 02:44:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:09.567236 | orchestrator | 2026-01-01 02:44:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:09.567397 | orchestrator | 2026-01-01 02:44:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:12.625853 | orchestrator | 2026-01-01 02:44:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:12.628690 | orchestrator | 2026-01-01 02:44:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:12.628769 | orchestrator | 2026-01-01 02:44:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:15.675979 | orchestrator | 2026-01-01 02:44:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:15.676707 | orchestrator | 2026-01-01 02:44:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:15.676807 | orchestrator | 2026-01-01 02:44:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:18.731934 | orchestrator | 2026-01-01 02:44:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:44:18.734395 | orchestrator | 2026-01-01 02:44:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:18.734453 | orchestrator | 2026-01-01 02:44:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:21.791114 | orchestrator | 2026-01-01 02:44:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:21.794347 | orchestrator | 2026-01-01 02:44:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:21.794403 | orchestrator | 2026-01-01 02:44:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:24.851917 | orchestrator | 2026-01-01 02:44:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:24.854152 | orchestrator | 2026-01-01 02:44:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:24.854216 | orchestrator | 2026-01-01 02:44:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:27.894632 | orchestrator | 2026-01-01 02:44:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:27.897114 | orchestrator | 2026-01-01 02:44:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:27.897162 | orchestrator | 2026-01-01 02:44:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:30.948847 | orchestrator | 2026-01-01 02:44:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:30.950718 | orchestrator | 2026-01-01 02:44:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:30.950817 | orchestrator | 2026-01-01 02:44:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:33.997897 | orchestrator | 2026-01-01 02:44:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:33.999514 | orchestrator | 2026-01-01 02:44:33 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:33.999559 | orchestrator | 2026-01-01 02:44:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:37.052650 | orchestrator | 2026-01-01 02:44:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:37.054339 | orchestrator | 2026-01-01 02:44:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:37.054461 | orchestrator | 2026-01-01 02:44:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:40.104432 | orchestrator | 2026-01-01 02:44:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:40.105215 | orchestrator | 2026-01-01 02:44:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:40.105252 | orchestrator | 2026-01-01 02:44:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:43.147508 | orchestrator | 2026-01-01 02:44:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:43.149850 | orchestrator | 2026-01-01 02:44:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:43.149973 | orchestrator | 2026-01-01 02:44:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:46.199627 | orchestrator | 2026-01-01 02:44:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:46.200604 | orchestrator | 2026-01-01 02:44:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:46.200785 | orchestrator | 2026-01-01 02:44:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:49.256430 | orchestrator | 2026-01-01 02:44:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:49.257635 | orchestrator | 2026-01-01 02:44:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:44:49.257676 | orchestrator | 2026-01-01 02:44:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:52.305317 | orchestrator | 2026-01-01 02:44:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:52.306515 | orchestrator | 2026-01-01 02:44:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:52.306553 | orchestrator | 2026-01-01 02:44:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:55.356018 | orchestrator | 2026-01-01 02:44:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:55.357672 | orchestrator | 2026-01-01 02:44:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:55.357706 | orchestrator | 2026-01-01 02:44:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:44:58.411996 | orchestrator | 2026-01-01 02:44:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:44:58.413706 | orchestrator | 2026-01-01 02:44:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:44:58.413910 | orchestrator | 2026-01-01 02:44:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:01.465322 | orchestrator | 2026-01-01 02:45:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:01.467092 | orchestrator | 2026-01-01 02:45:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:01.467134 | orchestrator | 2026-01-01 02:45:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:04.516682 | orchestrator | 2026-01-01 02:45:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:04.519488 | orchestrator | 2026-01-01 02:45:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:04.519525 | orchestrator | 2026-01-01 02:45:04 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:45:07.562531 | orchestrator | 2026-01-01 02:45:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:07.565850 | orchestrator | 2026-01-01 02:45:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:07.565915 | orchestrator | 2026-01-01 02:45:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:10.618144 | orchestrator | 2026-01-01 02:45:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:10.619587 | orchestrator | 2026-01-01 02:45:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:10.619611 | orchestrator | 2026-01-01 02:45:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:13.676234 | orchestrator | 2026-01-01 02:45:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:13.677865 | orchestrator | 2026-01-01 02:45:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:13.677899 | orchestrator | 2026-01-01 02:45:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:16.723654 | orchestrator | 2026-01-01 02:45:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:16.725109 | orchestrator | 2026-01-01 02:45:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:16.725170 | orchestrator | 2026-01-01 02:45:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:19.774424 | orchestrator | 2026-01-01 02:45:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:19.776599 | orchestrator | 2026-01-01 02:45:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:19.776662 | orchestrator | 2026-01-01 02:45:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:22.831008 | orchestrator | 2026-01-01 
02:45:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:22.833726 | orchestrator | 2026-01-01 02:45:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:22.833785 | orchestrator | 2026-01-01 02:45:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:25.883140 | orchestrator | 2026-01-01 02:45:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:25.883843 | orchestrator | 2026-01-01 02:45:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:25.884089 | orchestrator | 2026-01-01 02:45:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:28.935913 | orchestrator | 2026-01-01 02:45:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:28.937922 | orchestrator | 2026-01-01 02:45:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:28.937999 | orchestrator | 2026-01-01 02:45:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:31.986713 | orchestrator | 2026-01-01 02:45:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:31.990593 | orchestrator | 2026-01-01 02:45:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:31.990639 | orchestrator | 2026-01-01 02:45:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:35.041528 | orchestrator | 2026-01-01 02:45:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:35.042578 | orchestrator | 2026-01-01 02:45:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:35.042618 | orchestrator | 2026-01-01 02:45:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:38.084980 | orchestrator | 2026-01-01 02:45:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:45:38.087469 | orchestrator | 2026-01-01 02:45:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:38.087548 | orchestrator | 2026-01-01 02:45:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:41.142250 | orchestrator | 2026-01-01 02:45:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:41.144572 | orchestrator | 2026-01-01 02:45:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:41.144595 | orchestrator | 2026-01-01 02:45:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:44.190263 | orchestrator | 2026-01-01 02:45:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:44.191933 | orchestrator | 2026-01-01 02:45:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:44.191975 | orchestrator | 2026-01-01 02:45:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:47.244321 | orchestrator | 2026-01-01 02:45:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:47.244965 | orchestrator | 2026-01-01 02:45:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:47.244989 | orchestrator | 2026-01-01 02:45:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:50.293331 | orchestrator | 2026-01-01 02:45:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:50.295121 | orchestrator | 2026-01-01 02:45:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:50.295170 | orchestrator | 2026-01-01 02:45:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:53.344407 | orchestrator | 2026-01-01 02:45:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:53.345581 | orchestrator | 2026-01-01 02:45:53 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:53.345723 | orchestrator | 2026-01-01 02:45:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:56.401216 | orchestrator | 2026-01-01 02:45:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:56.403025 | orchestrator | 2026-01-01 02:45:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:56.403462 | orchestrator | 2026-01-01 02:45:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:45:59.445419 | orchestrator | 2026-01-01 02:45:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:45:59.446982 | orchestrator | 2026-01-01 02:45:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:45:59.447056 | orchestrator | 2026-01-01 02:45:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:46:02.491500 | orchestrator | 2026-01-01 02:46:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:46:02.493706 | orchestrator | 2026-01-01 02:46:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:46:02.493794 | orchestrator | 2026-01-01 02:46:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:46:05.543306 | orchestrator | 2026-01-01 02:46:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:46:05.544541 | orchestrator | 2026-01-01 02:46:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:46:05.544723 | orchestrator | 2026-01-01 02:46:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:46:08.602533 | orchestrator | 2026-01-01 02:46:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:46:08.604707 | orchestrator | 2026-01-01 02:46:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:46:08.604761 | orchestrator | 2026-01-01 02:46:08 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:46:11.651081 | orchestrator | 2026-01-01 02:46:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:46:11.652823 | orchestrator | 2026-01-01 02:46:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:46:11.652938 | orchestrator | 2026-01-01 02:46:11 | INFO  | Wait 1 second(s) until the next check
[... the same two status checks repeat every ~3 seconds from 02:46:14 through 02:51:38; tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858 remain in state STARTED throughout ...]
2026-01-01 02:51:41.110844 | orchestrator | 2026-01-01 02:51:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:51:41.112870 | orchestrator | 2026-01-01 02:51:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:51:41.112904 | orchestrator | 2026-01-01 02:51:41 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 02:51:44.155510 | orchestrator | 2026-01-01 02:51:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:51:44.158286 | orchestrator | 2026-01-01 02:51:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:51:44.158377 | orchestrator | 2026-01-01 02:51:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:47.204446 | orchestrator | 2026-01-01 02:51:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:51:47.207487 | orchestrator | 2026-01-01 02:51:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:51:47.207519 | orchestrator | 2026-01-01 02:51:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:50.257553 | orchestrator | 2026-01-01 02:51:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:51:50.258876 | orchestrator | 2026-01-01 02:51:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:51:50.258919 | orchestrator | 2026-01-01 02:51:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:53.307244 | orchestrator | 2026-01-01 02:51:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:51:53.309995 | orchestrator | 2026-01-01 02:51:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:51:53.310670 | orchestrator | 2026-01-01 02:51:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:56.362874 | orchestrator | 2026-01-01 02:51:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:51:56.363755 | orchestrator | 2026-01-01 02:51:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:51:56.363947 | orchestrator | 2026-01-01 02:51:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:59.414974 | orchestrator | 2026-01-01 
02:51:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:51:59.416397 | orchestrator | 2026-01-01 02:51:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:51:59.416431 | orchestrator | 2026-01-01 02:51:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:02.468278 | orchestrator | 2026-01-01 02:52:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:02.469551 | orchestrator | 2026-01-01 02:52:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:02.469590 | orchestrator | 2026-01-01 02:52:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:05.515600 | orchestrator | 2026-01-01 02:52:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:05.517419 | orchestrator | 2026-01-01 02:52:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:05.517455 | orchestrator | 2026-01-01 02:52:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:08.563831 | orchestrator | 2026-01-01 02:52:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:08.564534 | orchestrator | 2026-01-01 02:52:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:08.564556 | orchestrator | 2026-01-01 02:52:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:11.617285 | orchestrator | 2026-01-01 02:52:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:11.619630 | orchestrator | 2026-01-01 02:52:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:11.619697 | orchestrator | 2026-01-01 02:52:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:14.665564 | orchestrator | 2026-01-01 02:52:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:52:14.668355 | orchestrator | 2026-01-01 02:52:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:14.668425 | orchestrator | 2026-01-01 02:52:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:17.715996 | orchestrator | 2026-01-01 02:52:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:17.717693 | orchestrator | 2026-01-01 02:52:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:17.717787 | orchestrator | 2026-01-01 02:52:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:20.763023 | orchestrator | 2026-01-01 02:52:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:20.765354 | orchestrator | 2026-01-01 02:52:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:20.765857 | orchestrator | 2026-01-01 02:52:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:23.807616 | orchestrator | 2026-01-01 02:52:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:23.809889 | orchestrator | 2026-01-01 02:52:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:23.809942 | orchestrator | 2026-01-01 02:52:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:26.856853 | orchestrator | 2026-01-01 02:52:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:26.858592 | orchestrator | 2026-01-01 02:52:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:26.858650 | orchestrator | 2026-01-01 02:52:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:29.909281 | orchestrator | 2026-01-01 02:52:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:29.910703 | orchestrator | 2026-01-01 02:52:29 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:29.911368 | orchestrator | 2026-01-01 02:52:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:32.957966 | orchestrator | 2026-01-01 02:52:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:32.959267 | orchestrator | 2026-01-01 02:52:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:32.959341 | orchestrator | 2026-01-01 02:52:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:36.006676 | orchestrator | 2026-01-01 02:52:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:36.009652 | orchestrator | 2026-01-01 02:52:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:36.009781 | orchestrator | 2026-01-01 02:52:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:39.043102 | orchestrator | 2026-01-01 02:52:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:39.046391 | orchestrator | 2026-01-01 02:52:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:39.046463 | orchestrator | 2026-01-01 02:52:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:42.090393 | orchestrator | 2026-01-01 02:52:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:42.092319 | orchestrator | 2026-01-01 02:52:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:42.092374 | orchestrator | 2026-01-01 02:52:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:45.135986 | orchestrator | 2026-01-01 02:52:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:45.137275 | orchestrator | 2026-01-01 02:52:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:52:45.137333 | orchestrator | 2026-01-01 02:52:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:48.188655 | orchestrator | 2026-01-01 02:52:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:48.190774 | orchestrator | 2026-01-01 02:52:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:48.190823 | orchestrator | 2026-01-01 02:52:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:51.242541 | orchestrator | 2026-01-01 02:52:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:51.244408 | orchestrator | 2026-01-01 02:52:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:51.244466 | orchestrator | 2026-01-01 02:52:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:54.298857 | orchestrator | 2026-01-01 02:52:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:54.299482 | orchestrator | 2026-01-01 02:52:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:54.299515 | orchestrator | 2026-01-01 02:52:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:57.343425 | orchestrator | 2026-01-01 02:52:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:52:57.344373 | orchestrator | 2026-01-01 02:52:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:52:57.344428 | orchestrator | 2026-01-01 02:52:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:00.392099 | orchestrator | 2026-01-01 02:53:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:00.393412 | orchestrator | 2026-01-01 02:53:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:00.393453 | orchestrator | 2026-01-01 02:53:00 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:53:03.440598 | orchestrator | 2026-01-01 02:53:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:03.441710 | orchestrator | 2026-01-01 02:53:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:03.441738 | orchestrator | 2026-01-01 02:53:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:06.488589 | orchestrator | 2026-01-01 02:53:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:06.491609 | orchestrator | 2026-01-01 02:53:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:06.491648 | orchestrator | 2026-01-01 02:53:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:09.537534 | orchestrator | 2026-01-01 02:53:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:09.538478 | orchestrator | 2026-01-01 02:53:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:09.538508 | orchestrator | 2026-01-01 02:53:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:12.586786 | orchestrator | 2026-01-01 02:53:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:12.588382 | orchestrator | 2026-01-01 02:53:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:12.588416 | orchestrator | 2026-01-01 02:53:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:15.634864 | orchestrator | 2026-01-01 02:53:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:15.636761 | orchestrator | 2026-01-01 02:53:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:15.636812 | orchestrator | 2026-01-01 02:53:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:18.684885 | orchestrator | 2026-01-01 
02:53:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:18.686869 | orchestrator | 2026-01-01 02:53:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:18.686933 | orchestrator | 2026-01-01 02:53:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:21.736625 | orchestrator | 2026-01-01 02:53:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:21.738373 | orchestrator | 2026-01-01 02:53:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:21.738402 | orchestrator | 2026-01-01 02:53:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:24.786908 | orchestrator | 2026-01-01 02:53:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:24.787926 | orchestrator | 2026-01-01 02:53:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:24.787962 | orchestrator | 2026-01-01 02:53:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:27.844407 | orchestrator | 2026-01-01 02:53:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:27.848588 | orchestrator | 2026-01-01 02:53:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:27.848797 | orchestrator | 2026-01-01 02:53:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:30.904086 | orchestrator | 2026-01-01 02:53:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:30.904992 | orchestrator | 2026-01-01 02:53:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:30.905127 | orchestrator | 2026-01-01 02:53:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:33.948175 | orchestrator | 2026-01-01 02:53:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:53:33.949646 | orchestrator | 2026-01-01 02:53:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:33.949735 | orchestrator | 2026-01-01 02:53:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:36.999661 | orchestrator | 2026-01-01 02:53:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:37.001369 | orchestrator | 2026-01-01 02:53:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:37.001416 | orchestrator | 2026-01-01 02:53:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:40.047792 | orchestrator | 2026-01-01 02:53:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:40.048329 | orchestrator | 2026-01-01 02:53:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:40.048701 | orchestrator | 2026-01-01 02:53:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:43.088326 | orchestrator | 2026-01-01 02:53:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:43.090673 | orchestrator | 2026-01-01 02:53:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:43.090735 | orchestrator | 2026-01-01 02:53:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:46.135850 | orchestrator | 2026-01-01 02:53:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:46.137659 | orchestrator | 2026-01-01 02:53:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:46.137704 | orchestrator | 2026-01-01 02:53:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:49.189735 | orchestrator | 2026-01-01 02:53:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:49.191992 | orchestrator | 2026-01-01 02:53:49 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:49.192049 | orchestrator | 2026-01-01 02:53:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:52.247462 | orchestrator | 2026-01-01 02:53:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:52.249378 | orchestrator | 2026-01-01 02:53:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:52.249644 | orchestrator | 2026-01-01 02:53:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:55.295875 | orchestrator | 2026-01-01 02:53:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:55.298631 | orchestrator | 2026-01-01 02:53:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:55.298717 | orchestrator | 2026-01-01 02:53:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:53:58.343570 | orchestrator | 2026-01-01 02:53:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:53:58.345060 | orchestrator | 2026-01-01 02:53:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:53:58.345174 | orchestrator | 2026-01-01 02:53:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:01.379267 | orchestrator | 2026-01-01 02:54:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:01.380859 | orchestrator | 2026-01-01 02:54:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:01.380913 | orchestrator | 2026-01-01 02:54:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:04.437698 | orchestrator | 2026-01-01 02:54:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:04.440109 | orchestrator | 2026-01-01 02:54:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:54:04.440151 | orchestrator | 2026-01-01 02:54:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:07.490860 | orchestrator | 2026-01-01 02:54:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:07.492121 | orchestrator | 2026-01-01 02:54:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:07.492177 | orchestrator | 2026-01-01 02:54:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:10.536584 | orchestrator | 2026-01-01 02:54:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:10.538914 | orchestrator | 2026-01-01 02:54:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:10.538960 | orchestrator | 2026-01-01 02:54:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:13.579529 | orchestrator | 2026-01-01 02:54:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:13.580525 | orchestrator | 2026-01-01 02:54:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:13.581464 | orchestrator | 2026-01-01 02:54:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:16.626115 | orchestrator | 2026-01-01 02:54:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:16.627231 | orchestrator | 2026-01-01 02:54:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:16.627368 | orchestrator | 2026-01-01 02:54:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:19.667303 | orchestrator | 2026-01-01 02:54:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:19.669928 | orchestrator | 2026-01-01 02:54:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:19.669955 | orchestrator | 2026-01-01 02:54:19 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:54:22.714997 | orchestrator | 2026-01-01 02:54:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:22.717178 | orchestrator | 2026-01-01 02:54:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:22.717215 | orchestrator | 2026-01-01 02:54:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:25.769407 | orchestrator | 2026-01-01 02:54:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:25.772185 | orchestrator | 2026-01-01 02:54:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:25.772252 | orchestrator | 2026-01-01 02:54:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:28.823745 | orchestrator | 2026-01-01 02:54:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:28.824930 | orchestrator | 2026-01-01 02:54:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:28.824976 | orchestrator | 2026-01-01 02:54:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:31.873762 | orchestrator | 2026-01-01 02:54:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:31.877434 | orchestrator | 2026-01-01 02:54:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:31.877492 | orchestrator | 2026-01-01 02:54:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:34.924884 | orchestrator | 2026-01-01 02:54:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:34.926665 | orchestrator | 2026-01-01 02:54:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:34.927055 | orchestrator | 2026-01-01 02:54:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:37.972391 | orchestrator | 2026-01-01 
02:54:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:37.974400 | orchestrator | 2026-01-01 02:54:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:37.974458 | orchestrator | 2026-01-01 02:54:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:41.032930 | orchestrator | 2026-01-01 02:54:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:41.034671 | orchestrator | 2026-01-01 02:54:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:41.034738 | orchestrator | 2026-01-01 02:54:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:44.082606 | orchestrator | 2026-01-01 02:54:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:44.083703 | orchestrator | 2026-01-01 02:54:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:44.083722 | orchestrator | 2026-01-01 02:54:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:47.127660 | orchestrator | 2026-01-01 02:54:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:47.129507 | orchestrator | 2026-01-01 02:54:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:47.129571 | orchestrator | 2026-01-01 02:54:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:50.173730 | orchestrator | 2026-01-01 02:54:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:50.174773 | orchestrator | 2026-01-01 02:54:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:50.174828 | orchestrator | 2026-01-01 02:54:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:53.224935 | orchestrator | 2026-01-01 02:54:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 02:54:53.226544 | orchestrator | 2026-01-01 02:54:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:53.226614 | orchestrator | 2026-01-01 02:54:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:56.267924 | orchestrator | 2026-01-01 02:54:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:56.270178 | orchestrator | 2026-01-01 02:54:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:56.270485 | orchestrator | 2026-01-01 02:54:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:54:59.319916 | orchestrator | 2026-01-01 02:54:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:54:59.322083 | orchestrator | 2026-01-01 02:54:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:54:59.322147 | orchestrator | 2026-01-01 02:54:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:55:02.368999 | orchestrator | 2026-01-01 02:55:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:55:02.370829 | orchestrator | 2026-01-01 02:55:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:55:02.370890 | orchestrator | 2026-01-01 02:55:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:55:05.414211 | orchestrator | 2026-01-01 02:55:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:55:05.415513 | orchestrator | 2026-01-01 02:55:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:55:05.415624 | orchestrator | 2026-01-01 02:55:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:55:08.462616 | orchestrator | 2026-01-01 02:55:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:55:08.463625 | orchestrator | 2026-01-01 02:55:08 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:55:08.463654 | orchestrator | 2026-01-01 02:55:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:55:11.511801 | orchestrator | 2026-01-01 02:55:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:55:11.513740 | orchestrator | 2026-01-01 02:55:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:55:11.513854 | orchestrator | 2026-01-01 02:55:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:55:14.560034 | orchestrator | 2026-01-01 02:55:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:55:14.562636 | orchestrator | 2026-01-01 02:55:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:55:14.562698 | orchestrator | 2026-01-01 02:55:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:55:17.603611 | orchestrator | 2026-01-01 02:55:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:55:17.606096 | orchestrator | 2026-01-01 02:55:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:55:17.606186 | orchestrator | 2026-01-01 02:55:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:55:20.639839 | orchestrator | 2026-01-01 02:55:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:55:20.640663 | orchestrator | 2026-01-01 02:55:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 02:55:20.640764 | orchestrator | 2026-01-01 02:55:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:55:23.680465 | orchestrator | 2026-01-01 02:55:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 02:55:23.684335 | orchestrator | 2026-01-01 02:55:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
02:55:23.684946 | orchestrator | 2026-01-01 02:55:23 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:55:26.724683 | orchestrator | 2026-01-01 02:55:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 02:55:26.727751 | orchestrator | 2026-01-01 02:55:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 02:55:26.727825 | orchestrator | 2026-01-01 02:55:26 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 02:55:29 through 03:00:22 elided: tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858 remained in state STARTED throughout ...]
2026-01-01 03:00:25.779682 | orchestrator | 2026-01-01 03:00:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:25.781430 | orchestrator | 2026-01-01 03:00:25 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:25.781473 | orchestrator | 2026-01-01 03:00:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:28.832163 | orchestrator | 2026-01-01 03:00:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:28.832742 | orchestrator | 2026-01-01 03:00:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:28.832783 | orchestrator | 2026-01-01 03:00:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:31.884083 | orchestrator | 2026-01-01 03:00:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:31.886491 | orchestrator | 2026-01-01 03:00:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:31.886696 | orchestrator | 2026-01-01 03:00:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:34.936557 | orchestrator | 2026-01-01 03:00:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:34.938351 | orchestrator | 2026-01-01 03:00:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:34.938410 | orchestrator | 2026-01-01 03:00:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:37.988297 | orchestrator | 2026-01-01 03:00:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:37.989371 | orchestrator | 2026-01-01 03:00:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:37.989420 | orchestrator | 2026-01-01 03:00:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:41.048020 | orchestrator | 2026-01-01 03:00:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:41.049402 | orchestrator | 2026-01-01 03:00:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:00:41.049448 | orchestrator | 2026-01-01 03:00:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:44.096025 | orchestrator | 2026-01-01 03:00:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:44.096111 | orchestrator | 2026-01-01 03:00:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:44.096120 | orchestrator | 2026-01-01 03:00:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:47.139360 | orchestrator | 2026-01-01 03:00:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:47.141347 | orchestrator | 2026-01-01 03:00:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:47.141856 | orchestrator | 2026-01-01 03:00:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:50.183907 | orchestrator | 2026-01-01 03:00:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:50.185949 | orchestrator | 2026-01-01 03:00:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:50.186009 | orchestrator | 2026-01-01 03:00:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:53.231377 | orchestrator | 2026-01-01 03:00:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:53.232700 | orchestrator | 2026-01-01 03:00:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:53.232757 | orchestrator | 2026-01-01 03:00:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:56.287915 | orchestrator | 2026-01-01 03:00:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:56.291496 | orchestrator | 2026-01-01 03:00:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:56.291548 | orchestrator | 2026-01-01 03:00:56 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:00:59.344725 | orchestrator | 2026-01-01 03:00:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:00:59.346527 | orchestrator | 2026-01-01 03:00:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:00:59.346689 | orchestrator | 2026-01-01 03:00:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:02.402335 | orchestrator | 2026-01-01 03:01:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:02.403639 | orchestrator | 2026-01-01 03:01:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:02.403682 | orchestrator | 2026-01-01 03:01:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:05.452343 | orchestrator | 2026-01-01 03:01:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:05.454861 | orchestrator | 2026-01-01 03:01:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:05.454921 | orchestrator | 2026-01-01 03:01:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:08.512297 | orchestrator | 2026-01-01 03:01:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:08.513376 | orchestrator | 2026-01-01 03:01:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:08.513423 | orchestrator | 2026-01-01 03:01:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:11.565792 | orchestrator | 2026-01-01 03:01:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:11.569981 | orchestrator | 2026-01-01 03:01:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:11.570111 | orchestrator | 2026-01-01 03:01:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:14.625458 | orchestrator | 2026-01-01 
03:01:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:14.627187 | orchestrator | 2026-01-01 03:01:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:14.627265 | orchestrator | 2026-01-01 03:01:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:17.679906 | orchestrator | 2026-01-01 03:01:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:17.683920 | orchestrator | 2026-01-01 03:01:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:17.683989 | orchestrator | 2026-01-01 03:01:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:20.734852 | orchestrator | 2026-01-01 03:01:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:20.735780 | orchestrator | 2026-01-01 03:01:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:20.735806 | orchestrator | 2026-01-01 03:01:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:23.784880 | orchestrator | 2026-01-01 03:01:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:23.787419 | orchestrator | 2026-01-01 03:01:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:23.787495 | orchestrator | 2026-01-01 03:01:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:26.839512 | orchestrator | 2026-01-01 03:01:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:26.841972 | orchestrator | 2026-01-01 03:01:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:26.842129 | orchestrator | 2026-01-01 03:01:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:29.900662 | orchestrator | 2026-01-01 03:01:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:01:29.903385 | orchestrator | 2026-01-01 03:01:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:29.903478 | orchestrator | 2026-01-01 03:01:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:32.961998 | orchestrator | 2026-01-01 03:01:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:32.963691 | orchestrator | 2026-01-01 03:01:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:32.963738 | orchestrator | 2026-01-01 03:01:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:36.019463 | orchestrator | 2026-01-01 03:01:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:36.021818 | orchestrator | 2026-01-01 03:01:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:36.021872 | orchestrator | 2026-01-01 03:01:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:39.072276 | orchestrator | 2026-01-01 03:01:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:39.075302 | orchestrator | 2026-01-01 03:01:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:39.075408 | orchestrator | 2026-01-01 03:01:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:42.120974 | orchestrator | 2026-01-01 03:01:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:42.123411 | orchestrator | 2026-01-01 03:01:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:42.123458 | orchestrator | 2026-01-01 03:01:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:45.187775 | orchestrator | 2026-01-01 03:01:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:45.188817 | orchestrator | 2026-01-01 03:01:45 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:45.188850 | orchestrator | 2026-01-01 03:01:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:48.226327 | orchestrator | 2026-01-01 03:01:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:48.226707 | orchestrator | 2026-01-01 03:01:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:48.226748 | orchestrator | 2026-01-01 03:01:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:51.282404 | orchestrator | 2026-01-01 03:01:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:51.283703 | orchestrator | 2026-01-01 03:01:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:51.283883 | orchestrator | 2026-01-01 03:01:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:54.338907 | orchestrator | 2026-01-01 03:01:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:54.340149 | orchestrator | 2026-01-01 03:01:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:54.340175 | orchestrator | 2026-01-01 03:01:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:01:57.395142 | orchestrator | 2026-01-01 03:01:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:01:57.397280 | orchestrator | 2026-01-01 03:01:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:01:57.397370 | orchestrator | 2026-01-01 03:01:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:00.451420 | orchestrator | 2026-01-01 03:02:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:00.454402 | orchestrator | 2026-01-01 03:02:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:02:00.454680 | orchestrator | 2026-01-01 03:02:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:03.500324 | orchestrator | 2026-01-01 03:02:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:03.501208 | orchestrator | 2026-01-01 03:02:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:03.501243 | orchestrator | 2026-01-01 03:02:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:06.547201 | orchestrator | 2026-01-01 03:02:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:06.550732 | orchestrator | 2026-01-01 03:02:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:06.550818 | orchestrator | 2026-01-01 03:02:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:09.604034 | orchestrator | 2026-01-01 03:02:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:09.606974 | orchestrator | 2026-01-01 03:02:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:09.607426 | orchestrator | 2026-01-01 03:02:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:12.664782 | orchestrator | 2026-01-01 03:02:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:12.665806 | orchestrator | 2026-01-01 03:02:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:12.665926 | orchestrator | 2026-01-01 03:02:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:15.711354 | orchestrator | 2026-01-01 03:02:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:15.711884 | orchestrator | 2026-01-01 03:02:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:15.713168 | orchestrator | 2026-01-01 03:02:15 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:02:18.770792 | orchestrator | 2026-01-01 03:02:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:18.773770 | orchestrator | 2026-01-01 03:02:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:18.773847 | orchestrator | 2026-01-01 03:02:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:21.823967 | orchestrator | 2026-01-01 03:02:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:21.826274 | orchestrator | 2026-01-01 03:02:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:21.826308 | orchestrator | 2026-01-01 03:02:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:24.872220 | orchestrator | 2026-01-01 03:02:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:24.874000 | orchestrator | 2026-01-01 03:02:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:24.874113 | orchestrator | 2026-01-01 03:02:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:27.928874 | orchestrator | 2026-01-01 03:02:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:27.929886 | orchestrator | 2026-01-01 03:02:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:27.929967 | orchestrator | 2026-01-01 03:02:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:30.984061 | orchestrator | 2026-01-01 03:02:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:30.985108 | orchestrator | 2026-01-01 03:02:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:30.985143 | orchestrator | 2026-01-01 03:02:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:34.036959 | orchestrator | 2026-01-01 
03:02:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:34.039427 | orchestrator | 2026-01-01 03:02:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:34.039465 | orchestrator | 2026-01-01 03:02:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:37.086582 | orchestrator | 2026-01-01 03:02:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:37.087887 | orchestrator | 2026-01-01 03:02:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:37.087926 | orchestrator | 2026-01-01 03:02:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:40.142442 | orchestrator | 2026-01-01 03:02:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:40.145725 | orchestrator | 2026-01-01 03:02:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:40.145768 | orchestrator | 2026-01-01 03:02:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:43.198379 | orchestrator | 2026-01-01 03:02:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:43.199519 | orchestrator | 2026-01-01 03:02:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:43.199569 | orchestrator | 2026-01-01 03:02:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:46.250492 | orchestrator | 2026-01-01 03:02:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:46.251273 | orchestrator | 2026-01-01 03:02:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:46.251371 | orchestrator | 2026-01-01 03:02:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:49.310291 | orchestrator | 2026-01-01 03:02:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:02:49.315854 | orchestrator | 2026-01-01 03:02:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:49.315933 | orchestrator | 2026-01-01 03:02:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:52.358393 | orchestrator | 2026-01-01 03:02:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:52.360960 | orchestrator | 2026-01-01 03:02:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:52.361013 | orchestrator | 2026-01-01 03:02:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:55.407974 | orchestrator | 2026-01-01 03:02:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:55.409526 | orchestrator | 2026-01-01 03:02:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:55.409561 | orchestrator | 2026-01-01 03:02:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:02:58.466863 | orchestrator | 2026-01-01 03:02:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:02:58.468110 | orchestrator | 2026-01-01 03:02:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:02:58.468162 | orchestrator | 2026-01-01 03:02:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:01.515002 | orchestrator | 2026-01-01 03:03:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:01.516972 | orchestrator | 2026-01-01 03:03:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:01.517056 | orchestrator | 2026-01-01 03:03:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:04.566129 | orchestrator | 2026-01-01 03:03:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:04.569373 | orchestrator | 2026-01-01 03:03:04 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:04.569457 | orchestrator | 2026-01-01 03:03:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:07.630971 | orchestrator | 2026-01-01 03:03:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:07.634987 | orchestrator | 2026-01-01 03:03:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:07.635043 | orchestrator | 2026-01-01 03:03:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:10.682999 | orchestrator | 2026-01-01 03:03:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:10.684379 | orchestrator | 2026-01-01 03:03:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:10.684526 | orchestrator | 2026-01-01 03:03:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:13.737138 | orchestrator | 2026-01-01 03:03:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:13.739457 | orchestrator | 2026-01-01 03:03:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:13.739558 | orchestrator | 2026-01-01 03:03:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:16.791244 | orchestrator | 2026-01-01 03:03:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:16.791969 | orchestrator | 2026-01-01 03:03:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:16.794313 | orchestrator | 2026-01-01 03:03:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:19.854348 | orchestrator | 2026-01-01 03:03:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:19.856114 | orchestrator | 2026-01-01 03:03:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:03:19.856197 | orchestrator | 2026-01-01 03:03:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:22.911967 | orchestrator | 2026-01-01 03:03:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:22.913809 | orchestrator | 2026-01-01 03:03:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:22.913865 | orchestrator | 2026-01-01 03:03:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:25.963570 | orchestrator | 2026-01-01 03:03:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:25.967257 | orchestrator | 2026-01-01 03:03:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:25.967329 | orchestrator | 2026-01-01 03:03:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:29.017279 | orchestrator | 2026-01-01 03:03:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:29.018133 | orchestrator | 2026-01-01 03:03:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:29.018172 | orchestrator | 2026-01-01 03:03:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:32.061168 | orchestrator | 2026-01-01 03:03:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:32.061410 | orchestrator | 2026-01-01 03:03:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:32.061438 | orchestrator | 2026-01-01 03:03:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:35.114292 | orchestrator | 2026-01-01 03:03:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:35.114908 | orchestrator | 2026-01-01 03:03:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:35.114946 | orchestrator | 2026-01-01 03:03:35 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:03:38.167522 | orchestrator | 2026-01-01 03:03:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:38.169118 | orchestrator | 2026-01-01 03:03:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:38.169167 | orchestrator | 2026-01-01 03:03:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:41.225445 | orchestrator | 2026-01-01 03:03:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:41.226413 | orchestrator | 2026-01-01 03:03:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:41.226908 | orchestrator | 2026-01-01 03:03:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:44.277173 | orchestrator | 2026-01-01 03:03:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:44.278353 | orchestrator | 2026-01-01 03:03:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:44.278376 | orchestrator | 2026-01-01 03:03:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:47.324453 | orchestrator | 2026-01-01 03:03:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:47.325482 | orchestrator | 2026-01-01 03:03:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:47.325528 | orchestrator | 2026-01-01 03:03:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:50.378437 | orchestrator | 2026-01-01 03:03:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:50.381691 | orchestrator | 2026-01-01 03:03:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:50.381728 | orchestrator | 2026-01-01 03:03:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:53.435856 | orchestrator | 2026-01-01 
03:03:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:53.437777 | orchestrator | 2026-01-01 03:03:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:53.437857 | orchestrator | 2026-01-01 03:03:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:56.483874 | orchestrator | 2026-01-01 03:03:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:56.485542 | orchestrator | 2026-01-01 03:03:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:56.485810 | orchestrator | 2026-01-01 03:03:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:03:59.541740 | orchestrator | 2026-01-01 03:03:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:03:59.542414 | orchestrator | 2026-01-01 03:03:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:03:59.542433 | orchestrator | 2026-01-01 03:03:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:04:02.582999 | orchestrator | 2026-01-01 03:04:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:04:02.583217 | orchestrator | 2026-01-01 03:04:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:04:02.583235 | orchestrator | 2026-01-01 03:04:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:04:05.632474 | orchestrator | 2026-01-01 03:04:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:04:05.635245 | orchestrator | 2026-01-01 03:04:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:04:05.635314 | orchestrator | 2026-01-01 03:04:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:04:08.676293 | orchestrator | 2026-01-01 03:04:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:04:08.677290 | orchestrator | 2026-01-01 03:04:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:04:08.677976 | orchestrator | 2026-01-01 03:04:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:04:11.729747 | orchestrator | 2026-01-01 03:04:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:04:11.731454 | orchestrator | 2026-01-01 03:04:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:04:11.731762 | orchestrator | 2026-01-01 03:04:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:04:14.784185 | orchestrator | 2026-01-01 03:04:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:04:14.784951 | orchestrator | 2026-01-01 03:04:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:04:14.784996 | orchestrator | 2026-01-01 03:04:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:04:17.825733 | orchestrator | 2026-01-01 03:04:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:04:17.826913 | orchestrator | 2026-01-01 03:04:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:04:17.826958 | orchestrator | 2026-01-01 03:04:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:04:20.881008 | orchestrator | 2026-01-01 03:04:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:04:20.882743 | orchestrator | 2026-01-01 03:04:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:04:20.882783 | orchestrator | 2026-01-01 03:04:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:04:23.915791 | orchestrator | 2026-01-01 03:04:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:04:23.917760 | orchestrator | 2026-01-01 03:04:23 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 03:09:41.286434 | orchestrator | 2026-01-01 03:09:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 03:09:41.287329 | orchestrator | 2026-01-01 03:09:41 | INFO 
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:09:41.287383 | orchestrator | 2026-01-01 03:09:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:44.334404 | orchestrator | 2026-01-01 03:09:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:09:44.336868 | orchestrator | 2026-01-01 03:09:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:09:44.336917 | orchestrator | 2026-01-01 03:09:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:47.387153 | orchestrator | 2026-01-01 03:09:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:09:47.388794 | orchestrator | 2026-01-01 03:09:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:09:47.388831 | orchestrator | 2026-01-01 03:09:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:50.429052 | orchestrator | 2026-01-01 03:09:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:09:50.429700 | orchestrator | 2026-01-01 03:09:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:09:50.429736 | orchestrator | 2026-01-01 03:09:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:53.469168 | orchestrator | 2026-01-01 03:09:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:09:53.470631 | orchestrator | 2026-01-01 03:09:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:09:53.470760 | orchestrator | 2026-01-01 03:09:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:56.524632 | orchestrator | 2026-01-01 03:09:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:09:56.526218 | orchestrator | 2026-01-01 03:09:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:09:56.526284 | orchestrator | 2026-01-01 03:09:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:59.582781 | orchestrator | 2026-01-01 03:09:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:09:59.583641 | orchestrator | 2026-01-01 03:09:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:09:59.583683 | orchestrator | 2026-01-01 03:09:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:02.629323 | orchestrator | 2026-01-01 03:10:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:02.631938 | orchestrator | 2026-01-01 03:10:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:02.631974 | orchestrator | 2026-01-01 03:10:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:05.684540 | orchestrator | 2026-01-01 03:10:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:05.687098 | orchestrator | 2026-01-01 03:10:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:05.687173 | orchestrator | 2026-01-01 03:10:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:08.737794 | orchestrator | 2026-01-01 03:10:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:08.740058 | orchestrator | 2026-01-01 03:10:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:08.740116 | orchestrator | 2026-01-01 03:10:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:11.788135 | orchestrator | 2026-01-01 03:10:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:11.791152 | orchestrator | 2026-01-01 03:10:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:11.791205 | orchestrator | 2026-01-01 03:10:11 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:10:14.848809 | orchestrator | 2026-01-01 03:10:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:14.850379 | orchestrator | 2026-01-01 03:10:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:14.850433 | orchestrator | 2026-01-01 03:10:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:17.888055 | orchestrator | 2026-01-01 03:10:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:17.888256 | orchestrator | 2026-01-01 03:10:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:17.888279 | orchestrator | 2026-01-01 03:10:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:20.933060 | orchestrator | 2026-01-01 03:10:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:20.935301 | orchestrator | 2026-01-01 03:10:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:20.935374 | orchestrator | 2026-01-01 03:10:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:23.984559 | orchestrator | 2026-01-01 03:10:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:23.986439 | orchestrator | 2026-01-01 03:10:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:23.986505 | orchestrator | 2026-01-01 03:10:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:27.037327 | orchestrator | 2026-01-01 03:10:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:27.039521 | orchestrator | 2026-01-01 03:10:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:27.039601 | orchestrator | 2026-01-01 03:10:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:30.091072 | orchestrator | 2026-01-01 
03:10:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:30.091966 | orchestrator | 2026-01-01 03:10:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:30.091991 | orchestrator | 2026-01-01 03:10:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:33.127739 | orchestrator | 2026-01-01 03:10:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:33.128949 | orchestrator | 2026-01-01 03:10:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:33.128994 | orchestrator | 2026-01-01 03:10:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:36.176507 | orchestrator | 2026-01-01 03:10:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:36.179103 | orchestrator | 2026-01-01 03:10:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:36.179200 | orchestrator | 2026-01-01 03:10:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:39.220507 | orchestrator | 2026-01-01 03:10:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:39.221139 | orchestrator | 2026-01-01 03:10:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:39.221214 | orchestrator | 2026-01-01 03:10:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:42.274124 | orchestrator | 2026-01-01 03:10:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:42.280062 | orchestrator | 2026-01-01 03:10:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:42.280166 | orchestrator | 2026-01-01 03:10:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:45.329478 | orchestrator | 2026-01-01 03:10:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:10:45.331075 | orchestrator | 2026-01-01 03:10:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:45.331173 | orchestrator | 2026-01-01 03:10:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:48.376898 | orchestrator | 2026-01-01 03:10:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:48.377179 | orchestrator | 2026-01-01 03:10:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:48.377252 | orchestrator | 2026-01-01 03:10:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:51.430159 | orchestrator | 2026-01-01 03:10:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:51.432138 | orchestrator | 2026-01-01 03:10:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:51.432213 | orchestrator | 2026-01-01 03:10:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:54.480268 | orchestrator | 2026-01-01 03:10:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:54.481795 | orchestrator | 2026-01-01 03:10:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:54.481852 | orchestrator | 2026-01-01 03:10:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:10:57.529521 | orchestrator | 2026-01-01 03:10:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:10:57.531579 | orchestrator | 2026-01-01 03:10:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:10:57.531678 | orchestrator | 2026-01-01 03:10:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:00.583543 | orchestrator | 2026-01-01 03:11:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:00.586011 | orchestrator | 2026-01-01 03:11:00 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:00.586082 | orchestrator | 2026-01-01 03:11:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:03.635867 | orchestrator | 2026-01-01 03:11:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:03.636837 | orchestrator | 2026-01-01 03:11:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:03.636876 | orchestrator | 2026-01-01 03:11:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:06.680265 | orchestrator | 2026-01-01 03:11:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:06.681490 | orchestrator | 2026-01-01 03:11:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:06.681520 | orchestrator | 2026-01-01 03:11:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:09.731049 | orchestrator | 2026-01-01 03:11:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:09.733216 | orchestrator | 2026-01-01 03:11:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:09.733262 | orchestrator | 2026-01-01 03:11:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:12.785276 | orchestrator | 2026-01-01 03:11:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:12.788865 | orchestrator | 2026-01-01 03:11:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:12.788932 | orchestrator | 2026-01-01 03:11:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:15.834874 | orchestrator | 2026-01-01 03:11:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:15.835289 | orchestrator | 2026-01-01 03:11:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:11:15.835318 | orchestrator | 2026-01-01 03:11:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:18.887070 | orchestrator | 2026-01-01 03:11:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:18.889625 | orchestrator | 2026-01-01 03:11:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:18.889746 | orchestrator | 2026-01-01 03:11:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:21.942818 | orchestrator | 2026-01-01 03:11:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:21.944997 | orchestrator | 2026-01-01 03:11:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:21.945043 | orchestrator | 2026-01-01 03:11:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:24.999075 | orchestrator | 2026-01-01 03:11:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:25.001042 | orchestrator | 2026-01-01 03:11:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:25.001225 | orchestrator | 2026-01-01 03:11:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:28.043892 | orchestrator | 2026-01-01 03:11:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:28.044951 | orchestrator | 2026-01-01 03:11:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:28.044986 | orchestrator | 2026-01-01 03:11:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:31.095110 | orchestrator | 2026-01-01 03:11:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:31.095657 | orchestrator | 2026-01-01 03:11:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:31.095720 | orchestrator | 2026-01-01 03:11:31 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:11:34.132188 | orchestrator | 2026-01-01 03:11:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:34.133503 | orchestrator | 2026-01-01 03:11:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:34.133536 | orchestrator | 2026-01-01 03:11:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:37.189767 | orchestrator | 2026-01-01 03:11:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:37.190613 | orchestrator | 2026-01-01 03:11:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:37.190839 | orchestrator | 2026-01-01 03:11:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:40.241523 | orchestrator | 2026-01-01 03:11:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:40.243249 | orchestrator | 2026-01-01 03:11:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:40.243306 | orchestrator | 2026-01-01 03:11:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:43.293867 | orchestrator | 2026-01-01 03:11:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:43.298137 | orchestrator | 2026-01-01 03:11:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:43.298997 | orchestrator | 2026-01-01 03:11:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:46.345685 | orchestrator | 2026-01-01 03:11:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:46.347788 | orchestrator | 2026-01-01 03:11:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:46.347838 | orchestrator | 2026-01-01 03:11:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:49.401147 | orchestrator | 2026-01-01 
03:11:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:49.401491 | orchestrator | 2026-01-01 03:11:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:49.401510 | orchestrator | 2026-01-01 03:11:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:52.451791 | orchestrator | 2026-01-01 03:11:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:52.452736 | orchestrator | 2026-01-01 03:11:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:52.452776 | orchestrator | 2026-01-01 03:11:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:55.497321 | orchestrator | 2026-01-01 03:11:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:55.499202 | orchestrator | 2026-01-01 03:11:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:55.499272 | orchestrator | 2026-01-01 03:11:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:11:58.547224 | orchestrator | 2026-01-01 03:11:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:11:58.548346 | orchestrator | 2026-01-01 03:11:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:11:58.548381 | orchestrator | 2026-01-01 03:11:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:01.598883 | orchestrator | 2026-01-01 03:12:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:01.601441 | orchestrator | 2026-01-01 03:12:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:01.601489 | orchestrator | 2026-01-01 03:12:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:04.653147 | orchestrator | 2026-01-01 03:12:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:12:04.654869 | orchestrator | 2026-01-01 03:12:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:04.654925 | orchestrator | 2026-01-01 03:12:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:07.702847 | orchestrator | 2026-01-01 03:12:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:07.704297 | orchestrator | 2026-01-01 03:12:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:07.704355 | orchestrator | 2026-01-01 03:12:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:10.750091 | orchestrator | 2026-01-01 03:12:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:10.750581 | orchestrator | 2026-01-01 03:12:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:10.750671 | orchestrator | 2026-01-01 03:12:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:13.802766 | orchestrator | 2026-01-01 03:12:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:13.803513 | orchestrator | 2026-01-01 03:12:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:13.803578 | orchestrator | 2026-01-01 03:12:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:16.853501 | orchestrator | 2026-01-01 03:12:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:16.856239 | orchestrator | 2026-01-01 03:12:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:16.856319 | orchestrator | 2026-01-01 03:12:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:19.907025 | orchestrator | 2026-01-01 03:12:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:19.907279 | orchestrator | 2026-01-01 03:12:19 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:19.907306 | orchestrator | 2026-01-01 03:12:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:22.947999 | orchestrator | 2026-01-01 03:12:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:22.949872 | orchestrator | 2026-01-01 03:12:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:22.950157 | orchestrator | 2026-01-01 03:12:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:26.002498 | orchestrator | 2026-01-01 03:12:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:26.005572 | orchestrator | 2026-01-01 03:12:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:26.005806 | orchestrator | 2026-01-01 03:12:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:29.060799 | orchestrator | 2026-01-01 03:12:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:29.061226 | orchestrator | 2026-01-01 03:12:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:29.061253 | orchestrator | 2026-01-01 03:12:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:32.110978 | orchestrator | 2026-01-01 03:12:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:32.113308 | orchestrator | 2026-01-01 03:12:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:32.113379 | orchestrator | 2026-01-01 03:12:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:35.165419 | orchestrator | 2026-01-01 03:12:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:35.167032 | orchestrator | 2026-01-01 03:12:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:12:35.167073 | orchestrator | 2026-01-01 03:12:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:38.208686 | orchestrator | 2026-01-01 03:12:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:38.210806 | orchestrator | 2026-01-01 03:12:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:38.210859 | orchestrator | 2026-01-01 03:12:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:41.250907 | orchestrator | 2026-01-01 03:12:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:41.251112 | orchestrator | 2026-01-01 03:12:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:41.251130 | orchestrator | 2026-01-01 03:12:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:44.294723 | orchestrator | 2026-01-01 03:12:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:44.294855 | orchestrator | 2026-01-01 03:12:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:44.294868 | orchestrator | 2026-01-01 03:12:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:47.339385 | orchestrator | 2026-01-01 03:12:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:47.340586 | orchestrator | 2026-01-01 03:12:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:47.341418 | orchestrator | 2026-01-01 03:12:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:50.390770 | orchestrator | 2026-01-01 03:12:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:50.391956 | orchestrator | 2026-01-01 03:12:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:50.392005 | orchestrator | 2026-01-01 03:12:50 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:12:53.438214 | orchestrator | 2026-01-01 03:12:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:53.439647 | orchestrator | 2026-01-01 03:12:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:53.439678 | orchestrator | 2026-01-01 03:12:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:56.485467 | orchestrator | 2026-01-01 03:12:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:56.487188 | orchestrator | 2026-01-01 03:12:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:56.487237 | orchestrator | 2026-01-01 03:12:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:12:59.534272 | orchestrator | 2026-01-01 03:12:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:12:59.536724 | orchestrator | 2026-01-01 03:12:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:12:59.536821 | orchestrator | 2026-01-01 03:12:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:13:02.581695 | orchestrator | 2026-01-01 03:13:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:13:02.583229 | orchestrator | 2026-01-01 03:13:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:13:02.583277 | orchestrator | 2026-01-01 03:13:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:13:05.629773 | orchestrator | 2026-01-01 03:13:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:13:05.630918 | orchestrator | 2026-01-01 03:13:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:13:05.630940 | orchestrator | 2026-01-01 03:13:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:13:08.675217 | orchestrator | 2026-01-01 
03:13:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:13:08.677033 | orchestrator | 2026-01-01 03:13:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:13:08.677079 | orchestrator | 2026-01-01 03:13:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:13:11.727571 | orchestrator | 2026-01-01 03:13:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:13:11.728495 | orchestrator | 2026-01-01 03:13:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:13:11.728529 | orchestrator | 2026-01-01 03:13:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:13:14.776217 | orchestrator | 2026-01-01 03:13:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:13:14.778655 | orchestrator | 2026-01-01 03:13:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:13:14.778953 | orchestrator | 2026-01-01 03:13:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:13:17.827144 | orchestrator | 2026-01-01 03:13:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:13:17.828572 | orchestrator | 2026-01-01 03:13:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:13:17.829189 | orchestrator | 2026-01-01 03:13:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:13:20.864361 | orchestrator | 2026-01-01 03:13:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:13:20.864591 | orchestrator | 2026-01-01 03:13:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:13:20.864731 | orchestrator | 2026-01-01 03:13:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:13:23.904974 | orchestrator | 2026-01-01 03:13:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED
2026-01-01 03:13:23.905324 | orchestrator | 2026-01-01 03:13:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 03:13:23.905438 | orchestrator | 2026-01-01 03:13:23 | INFO  | Wait 1 second(s) until the next check
[... the same three log entries — "Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED", "Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED", and "Wait 1 second(s) until the next check" — repeat every ~3 seconds from 03:13:26 through 03:18:38; both tasks remained in state STARTED for the entire interval ...]
2026-01-01 03:18:41.299669 | orchestrator | 2026-01-01 03:18:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state
STARTED 2026-01-01 03:18:41.301300 | orchestrator | 2026-01-01 03:18:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:18:41.301369 | orchestrator | 2026-01-01 03:18:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:44.357265 | orchestrator | 2026-01-01 03:18:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:18:44.359120 | orchestrator | 2026-01-01 03:18:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:18:44.359188 | orchestrator | 2026-01-01 03:18:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:47.400340 | orchestrator | 2026-01-01 03:18:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:18:47.400841 | orchestrator | 2026-01-01 03:18:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:18:47.400869 | orchestrator | 2026-01-01 03:18:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:50.459282 | orchestrator | 2026-01-01 03:18:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:18:50.459994 | orchestrator | 2026-01-01 03:18:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:18:50.460035 | orchestrator | 2026-01-01 03:18:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:53.511867 | orchestrator | 2026-01-01 03:18:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:18:53.513489 | orchestrator | 2026-01-01 03:18:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:18:53.513537 | orchestrator | 2026-01-01 03:18:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:56.557932 | orchestrator | 2026-01-01 03:18:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:18:56.562425 | orchestrator | 2026-01-01 03:18:56 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:18:56.562512 | orchestrator | 2026-01-01 03:18:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:59.612590 | orchestrator | 2026-01-01 03:18:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:18:59.614145 | orchestrator | 2026-01-01 03:18:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:18:59.614246 | orchestrator | 2026-01-01 03:18:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:02.666120 | orchestrator | 2026-01-01 03:19:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:02.668665 | orchestrator | 2026-01-01 03:19:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:02.668814 | orchestrator | 2026-01-01 03:19:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:05.718386 | orchestrator | 2026-01-01 03:19:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:05.718571 | orchestrator | 2026-01-01 03:19:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:05.718593 | orchestrator | 2026-01-01 03:19:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:08.761973 | orchestrator | 2026-01-01 03:19:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:08.764866 | orchestrator | 2026-01-01 03:19:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:08.764948 | orchestrator | 2026-01-01 03:19:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:11.806222 | orchestrator | 2026-01-01 03:19:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:11.806787 | orchestrator | 2026-01-01 03:19:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:19:11.806831 | orchestrator | 2026-01-01 03:19:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:14.862367 | orchestrator | 2026-01-01 03:19:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:14.864321 | orchestrator | 2026-01-01 03:19:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:14.864506 | orchestrator | 2026-01-01 03:19:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:17.910624 | orchestrator | 2026-01-01 03:19:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:17.912176 | orchestrator | 2026-01-01 03:19:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:17.912231 | orchestrator | 2026-01-01 03:19:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:20.949171 | orchestrator | 2026-01-01 03:19:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:20.949790 | orchestrator | 2026-01-01 03:19:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:20.949861 | orchestrator | 2026-01-01 03:19:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:23.995217 | orchestrator | 2026-01-01 03:19:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:23.995305 | orchestrator | 2026-01-01 03:19:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:23.995315 | orchestrator | 2026-01-01 03:19:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:27.046822 | orchestrator | 2026-01-01 03:19:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:27.048197 | orchestrator | 2026-01-01 03:19:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:27.048425 | orchestrator | 2026-01-01 03:19:27 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:19:30.091584 | orchestrator | 2026-01-01 03:19:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:30.092010 | orchestrator | 2026-01-01 03:19:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:30.092363 | orchestrator | 2026-01-01 03:19:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:33.131688 | orchestrator | 2026-01-01 03:19:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:33.133162 | orchestrator | 2026-01-01 03:19:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:33.133220 | orchestrator | 2026-01-01 03:19:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:36.171151 | orchestrator | 2026-01-01 03:19:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:36.171559 | orchestrator | 2026-01-01 03:19:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:36.171606 | orchestrator | 2026-01-01 03:19:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:39.223773 | orchestrator | 2026-01-01 03:19:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:39.224429 | orchestrator | 2026-01-01 03:19:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:39.224453 | orchestrator | 2026-01-01 03:19:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:42.277416 | orchestrator | 2026-01-01 03:19:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:42.277717 | orchestrator | 2026-01-01 03:19:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:42.277786 | orchestrator | 2026-01-01 03:19:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:45.331622 | orchestrator | 2026-01-01 
03:19:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:45.333809 | orchestrator | 2026-01-01 03:19:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:45.333861 | orchestrator | 2026-01-01 03:19:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:48.385280 | orchestrator | 2026-01-01 03:19:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:48.386359 | orchestrator | 2026-01-01 03:19:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:48.386397 | orchestrator | 2026-01-01 03:19:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:51.437501 | orchestrator | 2026-01-01 03:19:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:51.438288 | orchestrator | 2026-01-01 03:19:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:51.438331 | orchestrator | 2026-01-01 03:19:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:54.494943 | orchestrator | 2026-01-01 03:19:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:54.495823 | orchestrator | 2026-01-01 03:19:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:54.495850 | orchestrator | 2026-01-01 03:19:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:57.546876 | orchestrator | 2026-01-01 03:19:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:19:57.550335 | orchestrator | 2026-01-01 03:19:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:19:57.550403 | orchestrator | 2026-01-01 03:19:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:00.591129 | orchestrator | 2026-01-01 03:20:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:20:00.591358 | orchestrator | 2026-01-01 03:20:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:00.591421 | orchestrator | 2026-01-01 03:20:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:03.643399 | orchestrator | 2026-01-01 03:20:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:03.646721 | orchestrator | 2026-01-01 03:20:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:03.646803 | orchestrator | 2026-01-01 03:20:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:06.694872 | orchestrator | 2026-01-01 03:20:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:06.697281 | orchestrator | 2026-01-01 03:20:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:06.697333 | orchestrator | 2026-01-01 03:20:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:09.749351 | orchestrator | 2026-01-01 03:20:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:09.751406 | orchestrator | 2026-01-01 03:20:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:09.751474 | orchestrator | 2026-01-01 03:20:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:12.800827 | orchestrator | 2026-01-01 03:20:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:12.802175 | orchestrator | 2026-01-01 03:20:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:12.802208 | orchestrator | 2026-01-01 03:20:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:15.848064 | orchestrator | 2026-01-01 03:20:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:15.849899 | orchestrator | 2026-01-01 03:20:15 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:15.849973 | orchestrator | 2026-01-01 03:20:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:18.901367 | orchestrator | 2026-01-01 03:20:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:18.901509 | orchestrator | 2026-01-01 03:20:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:18.901541 | orchestrator | 2026-01-01 03:20:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:21.955257 | orchestrator | 2026-01-01 03:20:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:21.957050 | orchestrator | 2026-01-01 03:20:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:21.957154 | orchestrator | 2026-01-01 03:20:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:25.010144 | orchestrator | 2026-01-01 03:20:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:25.011662 | orchestrator | 2026-01-01 03:20:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:25.011718 | orchestrator | 2026-01-01 03:20:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:28.053022 | orchestrator | 2026-01-01 03:20:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:28.054554 | orchestrator | 2026-01-01 03:20:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:28.054597 | orchestrator | 2026-01-01 03:20:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:31.101556 | orchestrator | 2026-01-01 03:20:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:31.103975 | orchestrator | 2026-01-01 03:20:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:20:31.104042 | orchestrator | 2026-01-01 03:20:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:34.145617 | orchestrator | 2026-01-01 03:20:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:34.145845 | orchestrator | 2026-01-01 03:20:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:34.145869 | orchestrator | 2026-01-01 03:20:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:37.191839 | orchestrator | 2026-01-01 03:20:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:37.193730 | orchestrator | 2026-01-01 03:20:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:37.193821 | orchestrator | 2026-01-01 03:20:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:40.236996 | orchestrator | 2026-01-01 03:20:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:40.238235 | orchestrator | 2026-01-01 03:20:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:40.238307 | orchestrator | 2026-01-01 03:20:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:43.290158 | orchestrator | 2026-01-01 03:20:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:43.291020 | orchestrator | 2026-01-01 03:20:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:43.291057 | orchestrator | 2026-01-01 03:20:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:46.338230 | orchestrator | 2026-01-01 03:20:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:46.338349 | orchestrator | 2026-01-01 03:20:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:46.338366 | orchestrator | 2026-01-01 03:20:46 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:20:49.392324 | orchestrator | 2026-01-01 03:20:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:49.392419 | orchestrator | 2026-01-01 03:20:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:49.392432 | orchestrator | 2026-01-01 03:20:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:52.442420 | orchestrator | 2026-01-01 03:20:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:52.443512 | orchestrator | 2026-01-01 03:20:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:52.443553 | orchestrator | 2026-01-01 03:20:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:55.492742 | orchestrator | 2026-01-01 03:20:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:55.494155 | orchestrator | 2026-01-01 03:20:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:55.494219 | orchestrator | 2026-01-01 03:20:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:20:58.540570 | orchestrator | 2026-01-01 03:20:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:20:58.542111 | orchestrator | 2026-01-01 03:20:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:20:58.542157 | orchestrator | 2026-01-01 03:20:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:01.595100 | orchestrator | 2026-01-01 03:21:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:01.598355 | orchestrator | 2026-01-01 03:21:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:01.598413 | orchestrator | 2026-01-01 03:21:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:04.659191 | orchestrator | 2026-01-01 
03:21:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:04.659657 | orchestrator | 2026-01-01 03:21:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:04.659998 | orchestrator | 2026-01-01 03:21:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:07.711156 | orchestrator | 2026-01-01 03:21:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:07.712735 | orchestrator | 2026-01-01 03:21:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:07.712808 | orchestrator | 2026-01-01 03:21:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:10.763624 | orchestrator | 2026-01-01 03:21:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:10.765193 | orchestrator | 2026-01-01 03:21:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:10.765241 | orchestrator | 2026-01-01 03:21:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:13.808587 | orchestrator | 2026-01-01 03:21:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:13.811755 | orchestrator | 2026-01-01 03:21:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:13.811904 | orchestrator | 2026-01-01 03:21:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:16.862089 | orchestrator | 2026-01-01 03:21:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:16.863709 | orchestrator | 2026-01-01 03:21:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:16.863749 | orchestrator | 2026-01-01 03:21:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:19.913330 | orchestrator | 2026-01-01 03:21:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:21:19.913714 | orchestrator | 2026-01-01 03:21:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:19.913925 | orchestrator | 2026-01-01 03:21:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:22.962148 | orchestrator | 2026-01-01 03:21:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:22.962734 | orchestrator | 2026-01-01 03:21:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:22.962777 | orchestrator | 2026-01-01 03:21:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:26.000710 | orchestrator | 2026-01-01 03:21:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:26.001200 | orchestrator | 2026-01-01 03:21:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:26.001233 | orchestrator | 2026-01-01 03:21:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:29.053379 | orchestrator | 2026-01-01 03:21:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:29.054970 | orchestrator | 2026-01-01 03:21:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:29.055016 | orchestrator | 2026-01-01 03:21:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:32.105797 | orchestrator | 2026-01-01 03:21:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:32.107798 | orchestrator | 2026-01-01 03:21:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:32.107882 | orchestrator | 2026-01-01 03:21:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:35.165575 | orchestrator | 2026-01-01 03:21:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:35.167963 | orchestrator | 2026-01-01 03:21:35 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:35.168031 | orchestrator | 2026-01-01 03:21:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:38.217490 | orchestrator | 2026-01-01 03:21:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:38.219045 | orchestrator | 2026-01-01 03:21:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:38.219101 | orchestrator | 2026-01-01 03:21:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:41.266548 | orchestrator | 2026-01-01 03:21:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:41.268304 | orchestrator | 2026-01-01 03:21:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:41.268354 | orchestrator | 2026-01-01 03:21:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:44.328019 | orchestrator | 2026-01-01 03:21:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:44.328109 | orchestrator | 2026-01-01 03:21:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:44.328126 | orchestrator | 2026-01-01 03:21:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:47.374498 | orchestrator | 2026-01-01 03:21:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:47.375924 | orchestrator | 2026-01-01 03:21:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:47.375991 | orchestrator | 2026-01-01 03:21:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:50.428445 | orchestrator | 2026-01-01 03:21:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:50.429480 | orchestrator | 2026-01-01 03:21:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:21:50.429528 | orchestrator | 2026-01-01 03:21:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:53.482556 | orchestrator | 2026-01-01 03:21:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:53.488029 | orchestrator | 2026-01-01 03:21:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:53.488313 | orchestrator | 2026-01-01 03:21:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:56.531981 | orchestrator | 2026-01-01 03:21:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:56.533332 | orchestrator | 2026-01-01 03:21:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:56.533363 | orchestrator | 2026-01-01 03:21:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:21:59.585696 | orchestrator | 2026-01-01 03:21:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:21:59.587726 | orchestrator | 2026-01-01 03:21:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:21:59.587940 | orchestrator | 2026-01-01 03:21:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:22:02.628293 | orchestrator | 2026-01-01 03:22:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:22:02.629108 | orchestrator | 2026-01-01 03:22:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:22:02.629305 | orchestrator | 2026-01-01 03:22:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:22:05.679987 | orchestrator | 2026-01-01 03:22:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:22:05.680733 | orchestrator | 2026-01-01 03:22:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:22:05.681085 | orchestrator | 2026-01-01 03:22:05 | INFO  | Wait 1 second(s) 
until the next check
2026-01-01 03:22:08.728730 | orchestrator | 2026-01-01 03:22:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 03:22:08.730211 | orchestrator | 2026-01-01 03:22:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 03:22:08.730276 | orchestrator | 2026-01-01 03:22:08 | INFO  | Wait 1 second(s) until the next check
[~100 identical polling cycles elided: tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858 remained in state STARTED, checked roughly every 3 seconds from 03:22:11 through 03:27:19]
2026-01-01 03:27:22.972457 | orchestrator | 2026-01-01 03:27:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 03:27:22.974449 | orchestrator | 2026-01-01 03:27:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 03:27:22.974501 | orchestrator | 2026-01-01 03:27:22 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 03:27:26.036043 | orchestrator | 2026-01-01 03:27:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:26.038802 | orchestrator | 2026-01-01 03:27:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:26.038871 | orchestrator | 2026-01-01 03:27:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:29.080125 | orchestrator | 2026-01-01 03:27:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:29.082613 | orchestrator | 2026-01-01 03:27:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:29.082695 | orchestrator | 2026-01-01 03:27:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:32.125522 | orchestrator | 2026-01-01 03:27:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:32.128537 | orchestrator | 2026-01-01 03:27:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:32.128619 | orchestrator | 2026-01-01 03:27:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:35.175891 | orchestrator | 2026-01-01 03:27:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:35.178575 | orchestrator | 2026-01-01 03:27:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:35.178772 | orchestrator | 2026-01-01 03:27:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:38.232900 | orchestrator | 2026-01-01 03:27:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:38.234953 | orchestrator | 2026-01-01 03:27:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:38.235073 | orchestrator | 2026-01-01 03:27:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:41.288405 | orchestrator | 2026-01-01 
03:27:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:41.290160 | orchestrator | 2026-01-01 03:27:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:41.290213 | orchestrator | 2026-01-01 03:27:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:44.342105 | orchestrator | 2026-01-01 03:27:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:44.343478 | orchestrator | 2026-01-01 03:27:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:44.343528 | orchestrator | 2026-01-01 03:27:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:47.392672 | orchestrator | 2026-01-01 03:27:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:47.395845 | orchestrator | 2026-01-01 03:27:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:47.395894 | orchestrator | 2026-01-01 03:27:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:50.458972 | orchestrator | 2026-01-01 03:27:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:50.459077 | orchestrator | 2026-01-01 03:27:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:50.459089 | orchestrator | 2026-01-01 03:27:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:53.508128 | orchestrator | 2026-01-01 03:27:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:53.510257 | orchestrator | 2026-01-01 03:27:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:53.510317 | orchestrator | 2026-01-01 03:27:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:56.562580 | orchestrator | 2026-01-01 03:27:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:27:56.563340 | orchestrator | 2026-01-01 03:27:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:56.563376 | orchestrator | 2026-01-01 03:27:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:59.616443 | orchestrator | 2026-01-01 03:27:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:27:59.617974 | orchestrator | 2026-01-01 03:27:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:27:59.618108 | orchestrator | 2026-01-01 03:27:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:02.676439 | orchestrator | 2026-01-01 03:28:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:02.677572 | orchestrator | 2026-01-01 03:28:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:02.677678 | orchestrator | 2026-01-01 03:28:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:05.725166 | orchestrator | 2026-01-01 03:28:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:05.727721 | orchestrator | 2026-01-01 03:28:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:05.727954 | orchestrator | 2026-01-01 03:28:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:08.773373 | orchestrator | 2026-01-01 03:28:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:08.777785 | orchestrator | 2026-01-01 03:28:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:08.777856 | orchestrator | 2026-01-01 03:28:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:11.833513 | orchestrator | 2026-01-01 03:28:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:11.835720 | orchestrator | 2026-01-01 03:28:11 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:11.835761 | orchestrator | 2026-01-01 03:28:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:14.888637 | orchestrator | 2026-01-01 03:28:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:14.890283 | orchestrator | 2026-01-01 03:28:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:14.890368 | orchestrator | 2026-01-01 03:28:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:17.943411 | orchestrator | 2026-01-01 03:28:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:17.946001 | orchestrator | 2026-01-01 03:28:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:17.946217 | orchestrator | 2026-01-01 03:28:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:20.993985 | orchestrator | 2026-01-01 03:28:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:20.996261 | orchestrator | 2026-01-01 03:28:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:20.996379 | orchestrator | 2026-01-01 03:28:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:24.043906 | orchestrator | 2026-01-01 03:28:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:24.046585 | orchestrator | 2026-01-01 03:28:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:24.046633 | orchestrator | 2026-01-01 03:28:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:27.100202 | orchestrator | 2026-01-01 03:28:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:27.102492 | orchestrator | 2026-01-01 03:28:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:28:27.102538 | orchestrator | 2026-01-01 03:28:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:30.153947 | orchestrator | 2026-01-01 03:28:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:30.156220 | orchestrator | 2026-01-01 03:28:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:30.156301 | orchestrator | 2026-01-01 03:28:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:33.206624 | orchestrator | 2026-01-01 03:28:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:33.208534 | orchestrator | 2026-01-01 03:28:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:33.208734 | orchestrator | 2026-01-01 03:28:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:36.261160 | orchestrator | 2026-01-01 03:28:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:36.263782 | orchestrator | 2026-01-01 03:28:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:36.263849 | orchestrator | 2026-01-01 03:28:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:39.317021 | orchestrator | 2026-01-01 03:28:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:39.318284 | orchestrator | 2026-01-01 03:28:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:39.318337 | orchestrator | 2026-01-01 03:28:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:42.370210 | orchestrator | 2026-01-01 03:28:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:42.372688 | orchestrator | 2026-01-01 03:28:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:42.372735 | orchestrator | 2026-01-01 03:28:42 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:28:45.426997 | orchestrator | 2026-01-01 03:28:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:45.429076 | orchestrator | 2026-01-01 03:28:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:45.429130 | orchestrator | 2026-01-01 03:28:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:48.480984 | orchestrator | 2026-01-01 03:28:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:48.483653 | orchestrator | 2026-01-01 03:28:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:48.483712 | orchestrator | 2026-01-01 03:28:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:51.531891 | orchestrator | 2026-01-01 03:28:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:51.533077 | orchestrator | 2026-01-01 03:28:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:51.533098 | orchestrator | 2026-01-01 03:28:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:54.584575 | orchestrator | 2026-01-01 03:28:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:54.586332 | orchestrator | 2026-01-01 03:28:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:54.586428 | orchestrator | 2026-01-01 03:28:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:28:57.635219 | orchestrator | 2026-01-01 03:28:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:28:57.639593 | orchestrator | 2026-01-01 03:28:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:28:57.639695 | orchestrator | 2026-01-01 03:28:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:00.693665 | orchestrator | 2026-01-01 
03:29:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:00.695934 | orchestrator | 2026-01-01 03:29:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:00.696091 | orchestrator | 2026-01-01 03:29:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:03.750640 | orchestrator | 2026-01-01 03:29:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:03.753174 | orchestrator | 2026-01-01 03:29:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:03.753555 | orchestrator | 2026-01-01 03:29:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:06.792658 | orchestrator | 2026-01-01 03:29:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:06.793736 | orchestrator | 2026-01-01 03:29:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:06.793793 | orchestrator | 2026-01-01 03:29:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:09.835111 | orchestrator | 2026-01-01 03:29:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:09.835840 | orchestrator | 2026-01-01 03:29:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:09.835877 | orchestrator | 2026-01-01 03:29:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:12.890697 | orchestrator | 2026-01-01 03:29:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:12.893132 | orchestrator | 2026-01-01 03:29:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:12.893169 | orchestrator | 2026-01-01 03:29:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:15.945318 | orchestrator | 2026-01-01 03:29:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:29:15.947261 | orchestrator | 2026-01-01 03:29:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:15.947316 | orchestrator | 2026-01-01 03:29:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:18.990876 | orchestrator | 2026-01-01 03:29:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:18.994054 | orchestrator | 2026-01-01 03:29:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:18.994097 | orchestrator | 2026-01-01 03:29:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:22.054613 | orchestrator | 2026-01-01 03:29:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:22.055402 | orchestrator | 2026-01-01 03:29:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:22.055430 | orchestrator | 2026-01-01 03:29:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:25.095109 | orchestrator | 2026-01-01 03:29:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:25.096824 | orchestrator | 2026-01-01 03:29:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:25.096862 | orchestrator | 2026-01-01 03:29:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:28.139921 | orchestrator | 2026-01-01 03:29:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:28.143577 | orchestrator | 2026-01-01 03:29:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:28.143643 | orchestrator | 2026-01-01 03:29:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:31.198632 | orchestrator | 2026-01-01 03:29:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:31.200418 | orchestrator | 2026-01-01 03:29:31 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:31.200998 | orchestrator | 2026-01-01 03:29:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:34.245893 | orchestrator | 2026-01-01 03:29:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:34.248782 | orchestrator | 2026-01-01 03:29:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:34.249411 | orchestrator | 2026-01-01 03:29:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:37.295326 | orchestrator | 2026-01-01 03:29:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:37.297080 | orchestrator | 2026-01-01 03:29:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:37.297175 | orchestrator | 2026-01-01 03:29:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:40.347025 | orchestrator | 2026-01-01 03:29:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:40.349540 | orchestrator | 2026-01-01 03:29:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:40.349603 | orchestrator | 2026-01-01 03:29:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:43.390369 | orchestrator | 2026-01-01 03:29:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:43.392916 | orchestrator | 2026-01-01 03:29:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:43.392994 | orchestrator | 2026-01-01 03:29:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:46.429430 | orchestrator | 2026-01-01 03:29:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:46.430918 | orchestrator | 2026-01-01 03:29:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:29:46.430986 | orchestrator | 2026-01-01 03:29:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:49.475428 | orchestrator | 2026-01-01 03:29:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:49.477182 | orchestrator | 2026-01-01 03:29:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:49.477336 | orchestrator | 2026-01-01 03:29:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:52.525660 | orchestrator | 2026-01-01 03:29:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:52.526372 | orchestrator | 2026-01-01 03:29:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:52.526408 | orchestrator | 2026-01-01 03:29:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:55.575773 | orchestrator | 2026-01-01 03:29:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:55.577162 | orchestrator | 2026-01-01 03:29:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:55.577192 | orchestrator | 2026-01-01 03:29:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:29:58.629379 | orchestrator | 2026-01-01 03:29:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:29:58.631541 | orchestrator | 2026-01-01 03:29:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:29:58.631599 | orchestrator | 2026-01-01 03:29:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:01.678587 | orchestrator | 2026-01-01 03:30:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:01.678911 | orchestrator | 2026-01-01 03:30:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:01.678942 | orchestrator | 2026-01-01 03:30:01 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:30:04.724650 | orchestrator | 2026-01-01 03:30:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:04.726657 | orchestrator | 2026-01-01 03:30:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:04.726714 | orchestrator | 2026-01-01 03:30:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:07.768701 | orchestrator | 2026-01-01 03:30:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:07.772010 | orchestrator | 2026-01-01 03:30:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:07.772585 | orchestrator | 2026-01-01 03:30:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:10.811275 | orchestrator | 2026-01-01 03:30:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:10.812284 | orchestrator | 2026-01-01 03:30:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:10.812334 | orchestrator | 2026-01-01 03:30:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:13.860804 | orchestrator | 2026-01-01 03:30:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:13.861990 | orchestrator | 2026-01-01 03:30:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:13.862203 | orchestrator | 2026-01-01 03:30:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:16.908193 | orchestrator | 2026-01-01 03:30:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:16.909183 | orchestrator | 2026-01-01 03:30:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:16.909215 | orchestrator | 2026-01-01 03:30:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:19.955523 | orchestrator | 2026-01-01 
03:30:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:19.957267 | orchestrator | 2026-01-01 03:30:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:19.957302 | orchestrator | 2026-01-01 03:30:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:22.999980 | orchestrator | 2026-01-01 03:30:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:23.001813 | orchestrator | 2026-01-01 03:30:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:23.001851 | orchestrator | 2026-01-01 03:30:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:26.043359 | orchestrator | 2026-01-01 03:30:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:26.044499 | orchestrator | 2026-01-01 03:30:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:26.044570 | orchestrator | 2026-01-01 03:30:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:29.095372 | orchestrator | 2026-01-01 03:30:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:29.096274 | orchestrator | 2026-01-01 03:30:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:29.096333 | orchestrator | 2026-01-01 03:30:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:32.139535 | orchestrator | 2026-01-01 03:30:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:32.141563 | orchestrator | 2026-01-01 03:30:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:32.141587 | orchestrator | 2026-01-01 03:30:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:35.195116 | orchestrator | 2026-01-01 03:30:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:30:35.196922 | orchestrator | 2026-01-01 03:30:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:35.197126 | orchestrator | 2026-01-01 03:30:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:38.251267 | orchestrator | 2026-01-01 03:30:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:38.251842 | orchestrator | 2026-01-01 03:30:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:38.251881 | orchestrator | 2026-01-01 03:30:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:41.309265 | orchestrator | 2026-01-01 03:30:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:41.310982 | orchestrator | 2026-01-01 03:30:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:41.311147 | orchestrator | 2026-01-01 03:30:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:44.370610 | orchestrator | 2026-01-01 03:30:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:44.372493 | orchestrator | 2026-01-01 03:30:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:44.372539 | orchestrator | 2026-01-01 03:30:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:47.418871 | orchestrator | 2026-01-01 03:30:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:47.420610 | orchestrator | 2026-01-01 03:30:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:47.420656 | orchestrator | 2026-01-01 03:30:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:50.468424 | orchestrator | 2026-01-01 03:30:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:50.469607 | orchestrator | 2026-01-01 03:30:50 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:50.469620 | orchestrator | 2026-01-01 03:30:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:53.517520 | orchestrator | 2026-01-01 03:30:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:53.520042 | orchestrator | 2026-01-01 03:30:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:53.520089 | orchestrator | 2026-01-01 03:30:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:56.562322 | orchestrator | 2026-01-01 03:30:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:56.562645 | orchestrator | 2026-01-01 03:30:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:56.562659 | orchestrator | 2026-01-01 03:30:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:30:59.608441 | orchestrator | 2026-01-01 03:30:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:30:59.611970 | orchestrator | 2026-01-01 03:30:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:30:59.612177 | orchestrator | 2026-01-01 03:30:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:31:02.660601 | orchestrator | 2026-01-01 03:31:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:31:02.661863 | orchestrator | 2026-01-01 03:31:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:31:02.661900 | orchestrator | 2026-01-01 03:31:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:31:05.713715 | orchestrator | 2026-01-01 03:31:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:31:05.714293 | orchestrator | 2026-01-01 03:31:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:31:05.714332 | orchestrator | 2026-01-01 03:31:05 | INFO  | Wait 1 second(s) until the next check
03:31:08.768305 | orchestrator | 2026-01-01 03:31:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
03:31:08.770585 | orchestrator | 2026-01-01 03:31:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
03:31:08.770645 | orchestrator | 2026-01-01 03:31:08 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycle repeated every ~3 seconds from 03:31:11 through 03:36:38: tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858 remain in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
until the next check 2026-01-01 03:36:41.475019 | orchestrator | 2026-01-01 03:36:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:36:41.477108 | orchestrator | 2026-01-01 03:36:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:36:41.477165 | orchestrator | 2026-01-01 03:36:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:44.525101 | orchestrator | 2026-01-01 03:36:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:36:44.527413 | orchestrator | 2026-01-01 03:36:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:36:44.527463 | orchestrator | 2026-01-01 03:36:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:47.572972 | orchestrator | 2026-01-01 03:36:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:36:47.574436 | orchestrator | 2026-01-01 03:36:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:36:47.574471 | orchestrator | 2026-01-01 03:36:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:50.640868 | orchestrator | 2026-01-01 03:36:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:36:50.644338 | orchestrator | 2026-01-01 03:36:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:36:50.644386 | orchestrator | 2026-01-01 03:36:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:53.691904 | orchestrator | 2026-01-01 03:36:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:36:53.693816 | orchestrator | 2026-01-01 03:36:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:36:53.693887 | orchestrator | 2026-01-01 03:36:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:56.744258 | orchestrator | 2026-01-01 
03:36:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:36:56.747229 | orchestrator | 2026-01-01 03:36:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:36:56.747376 | orchestrator | 2026-01-01 03:36:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:59.792726 | orchestrator | 2026-01-01 03:36:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:36:59.796136 | orchestrator | 2026-01-01 03:36:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:36:59.796240 | orchestrator | 2026-01-01 03:36:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:02.845438 | orchestrator | 2026-01-01 03:37:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:02.847832 | orchestrator | 2026-01-01 03:37:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:02.847884 | orchestrator | 2026-01-01 03:37:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:05.903170 | orchestrator | 2026-01-01 03:37:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:05.905296 | orchestrator | 2026-01-01 03:37:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:05.905332 | orchestrator | 2026-01-01 03:37:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:08.952996 | orchestrator | 2026-01-01 03:37:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:08.954396 | orchestrator | 2026-01-01 03:37:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:08.954448 | orchestrator | 2026-01-01 03:37:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:11.993047 | orchestrator | 2026-01-01 03:37:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:37:11.993809 | orchestrator | 2026-01-01 03:37:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:11.993842 | orchestrator | 2026-01-01 03:37:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:15.043799 | orchestrator | 2026-01-01 03:37:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:15.044658 | orchestrator | 2026-01-01 03:37:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:15.044695 | orchestrator | 2026-01-01 03:37:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:18.090243 | orchestrator | 2026-01-01 03:37:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:18.092028 | orchestrator | 2026-01-01 03:37:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:18.092116 | orchestrator | 2026-01-01 03:37:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:21.147321 | orchestrator | 2026-01-01 03:37:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:21.149699 | orchestrator | 2026-01-01 03:37:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:21.149743 | orchestrator | 2026-01-01 03:37:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:24.198956 | orchestrator | 2026-01-01 03:37:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:24.201156 | orchestrator | 2026-01-01 03:37:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:24.201312 | orchestrator | 2026-01-01 03:37:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:27.244362 | orchestrator | 2026-01-01 03:37:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:27.245886 | orchestrator | 2026-01-01 03:37:27 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:27.246003 | orchestrator | 2026-01-01 03:37:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:30.293599 | orchestrator | 2026-01-01 03:37:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:30.295144 | orchestrator | 2026-01-01 03:37:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:30.295242 | orchestrator | 2026-01-01 03:37:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:33.345967 | orchestrator | 2026-01-01 03:37:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:33.347534 | orchestrator | 2026-01-01 03:37:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:33.347587 | orchestrator | 2026-01-01 03:37:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:36.394296 | orchestrator | 2026-01-01 03:37:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:36.395302 | orchestrator | 2026-01-01 03:37:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:36.395337 | orchestrator | 2026-01-01 03:37:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:39.446393 | orchestrator | 2026-01-01 03:37:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:39.448782 | orchestrator | 2026-01-01 03:37:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:39.448817 | orchestrator | 2026-01-01 03:37:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:42.499443 | orchestrator | 2026-01-01 03:37:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:42.502565 | orchestrator | 2026-01-01 03:37:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:37:42.502709 | orchestrator | 2026-01-01 03:37:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:45.552700 | orchestrator | 2026-01-01 03:37:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:45.553430 | orchestrator | 2026-01-01 03:37:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:45.553473 | orchestrator | 2026-01-01 03:37:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:48.596738 | orchestrator | 2026-01-01 03:37:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:48.599888 | orchestrator | 2026-01-01 03:37:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:48.599967 | orchestrator | 2026-01-01 03:37:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:51.651391 | orchestrator | 2026-01-01 03:37:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:51.652811 | orchestrator | 2026-01-01 03:37:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:51.652950 | orchestrator | 2026-01-01 03:37:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:54.702741 | orchestrator | 2026-01-01 03:37:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:54.703937 | orchestrator | 2026-01-01 03:37:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:54.703975 | orchestrator | 2026-01-01 03:37:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:57.754508 | orchestrator | 2026-01-01 03:37:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:37:57.756279 | orchestrator | 2026-01-01 03:37:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:37:57.756720 | orchestrator | 2026-01-01 03:37:57 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:38:00.804714 | orchestrator | 2026-01-01 03:38:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:00.805240 | orchestrator | 2026-01-01 03:38:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:00.805392 | orchestrator | 2026-01-01 03:38:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:03.859234 | orchestrator | 2026-01-01 03:38:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:03.861618 | orchestrator | 2026-01-01 03:38:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:03.861674 | orchestrator | 2026-01-01 03:38:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:06.917128 | orchestrator | 2026-01-01 03:38:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:06.920348 | orchestrator | 2026-01-01 03:38:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:06.920430 | orchestrator | 2026-01-01 03:38:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:09.969332 | orchestrator | 2026-01-01 03:38:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:09.972419 | orchestrator | 2026-01-01 03:38:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:09.972481 | orchestrator | 2026-01-01 03:38:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:13.021513 | orchestrator | 2026-01-01 03:38:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:13.024013 | orchestrator | 2026-01-01 03:38:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:13.024094 | orchestrator | 2026-01-01 03:38:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:16.067761 | orchestrator | 2026-01-01 
03:38:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:16.069193 | orchestrator | 2026-01-01 03:38:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:16.069237 | orchestrator | 2026-01-01 03:38:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:19.120364 | orchestrator | 2026-01-01 03:38:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:19.122516 | orchestrator | 2026-01-01 03:38:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:19.122596 | orchestrator | 2026-01-01 03:38:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:22.174628 | orchestrator | 2026-01-01 03:38:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:22.177195 | orchestrator | 2026-01-01 03:38:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:22.177246 | orchestrator | 2026-01-01 03:38:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:25.223859 | orchestrator | 2026-01-01 03:38:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:25.225529 | orchestrator | 2026-01-01 03:38:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:25.225578 | orchestrator | 2026-01-01 03:38:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:28.279442 | orchestrator | 2026-01-01 03:38:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:28.280861 | orchestrator | 2026-01-01 03:38:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:28.280896 | orchestrator | 2026-01-01 03:38:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:31.332832 | orchestrator | 2026-01-01 03:38:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:38:31.335490 | orchestrator | 2026-01-01 03:38:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:31.335535 | orchestrator | 2026-01-01 03:38:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:34.389762 | orchestrator | 2026-01-01 03:38:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:34.392382 | orchestrator | 2026-01-01 03:38:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:34.392416 | orchestrator | 2026-01-01 03:38:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:37.449692 | orchestrator | 2026-01-01 03:38:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:37.451922 | orchestrator | 2026-01-01 03:38:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:37.451958 | orchestrator | 2026-01-01 03:38:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:40.496762 | orchestrator | 2026-01-01 03:38:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:40.498321 | orchestrator | 2026-01-01 03:38:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:40.498379 | orchestrator | 2026-01-01 03:38:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:43.549904 | orchestrator | 2026-01-01 03:38:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:43.552435 | orchestrator | 2026-01-01 03:38:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:43.552755 | orchestrator | 2026-01-01 03:38:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:46.600426 | orchestrator | 2026-01-01 03:38:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:46.602006 | orchestrator | 2026-01-01 03:38:46 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:46.602058 | orchestrator | 2026-01-01 03:38:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:49.650944 | orchestrator | 2026-01-01 03:38:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:49.653839 | orchestrator | 2026-01-01 03:38:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:49.653910 | orchestrator | 2026-01-01 03:38:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:52.701751 | orchestrator | 2026-01-01 03:38:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:52.703203 | orchestrator | 2026-01-01 03:38:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:52.703392 | orchestrator | 2026-01-01 03:38:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:55.753960 | orchestrator | 2026-01-01 03:38:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:55.755535 | orchestrator | 2026-01-01 03:38:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:55.755582 | orchestrator | 2026-01-01 03:38:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:38:58.796538 | orchestrator | 2026-01-01 03:38:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:38:58.799032 | orchestrator | 2026-01-01 03:38:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:38:58.799093 | orchestrator | 2026-01-01 03:38:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:01.839735 | orchestrator | 2026-01-01 03:39:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:01.840251 | orchestrator | 2026-01-01 03:39:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:39:01.840328 | orchestrator | 2026-01-01 03:39:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:04.889227 | orchestrator | 2026-01-01 03:39:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:04.890586 | orchestrator | 2026-01-01 03:39:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:04.890679 | orchestrator | 2026-01-01 03:39:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:07.932817 | orchestrator | 2026-01-01 03:39:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:07.934347 | orchestrator | 2026-01-01 03:39:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:07.934405 | orchestrator | 2026-01-01 03:39:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:10.984787 | orchestrator | 2026-01-01 03:39:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:10.985706 | orchestrator | 2026-01-01 03:39:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:10.985737 | orchestrator | 2026-01-01 03:39:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:14.032984 | orchestrator | 2026-01-01 03:39:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:14.034445 | orchestrator | 2026-01-01 03:39:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:14.034615 | orchestrator | 2026-01-01 03:39:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:17.080667 | orchestrator | 2026-01-01 03:39:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:17.082618 | orchestrator | 2026-01-01 03:39:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:17.082660 | orchestrator | 2026-01-01 03:39:17 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:39:20.127657 | orchestrator | 2026-01-01 03:39:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:20.129041 | orchestrator | 2026-01-01 03:39:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:20.129063 | orchestrator | 2026-01-01 03:39:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:23.167458 | orchestrator | 2026-01-01 03:39:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:23.168389 | orchestrator | 2026-01-01 03:39:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:23.168428 | orchestrator | 2026-01-01 03:39:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:26.207543 | orchestrator | 2026-01-01 03:39:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:26.211041 | orchestrator | 2026-01-01 03:39:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:26.211077 | orchestrator | 2026-01-01 03:39:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:29.260217 | orchestrator | 2026-01-01 03:39:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:29.260898 | orchestrator | 2026-01-01 03:39:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:29.261073 | orchestrator | 2026-01-01 03:39:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:32.303137 | orchestrator | 2026-01-01 03:39:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:32.305939 | orchestrator | 2026-01-01 03:39:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:32.306189 | orchestrator | 2026-01-01 03:39:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:35.354485 | orchestrator | 2026-01-01 
03:39:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:35.357338 | orchestrator | 2026-01-01 03:39:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:35.357375 | orchestrator | 2026-01-01 03:39:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:38.410808 | orchestrator | 2026-01-01 03:39:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:38.413046 | orchestrator | 2026-01-01 03:39:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:38.413085 | orchestrator | 2026-01-01 03:39:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:41.456083 | orchestrator | 2026-01-01 03:39:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:41.458875 | orchestrator | 2026-01-01 03:39:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:41.458917 | orchestrator | 2026-01-01 03:39:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:44.507262 | orchestrator | 2026-01-01 03:39:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:44.510444 | orchestrator | 2026-01-01 03:39:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:44.510495 | orchestrator | 2026-01-01 03:39:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:47.562614 | orchestrator | 2026-01-01 03:39:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:47.563646 | orchestrator | 2026-01-01 03:39:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:47.563694 | orchestrator | 2026-01-01 03:39:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:50.618201 | orchestrator | 2026-01-01 03:39:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:39:50.621208 | orchestrator | 2026-01-01 03:39:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:50.621355 | orchestrator | 2026-01-01 03:39:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:53.678921 | orchestrator | 2026-01-01 03:39:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:53.680886 | orchestrator | 2026-01-01 03:39:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:53.680910 | orchestrator | 2026-01-01 03:39:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:56.733562 | orchestrator | 2026-01-01 03:39:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:56.735746 | orchestrator | 2026-01-01 03:39:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:56.735837 | orchestrator | 2026-01-01 03:39:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:39:59.787190 | orchestrator | 2026-01-01 03:39:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:39:59.789150 | orchestrator | 2026-01-01 03:39:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:39:59.789492 | orchestrator | 2026-01-01 03:39:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:40:02.844977 | orchestrator | 2026-01-01 03:40:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:40:02.847586 | orchestrator | 2026-01-01 03:40:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:40:02.847637 | orchestrator | 2026-01-01 03:40:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:40:05.899817 | orchestrator | 2026-01-01 03:40:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:40:05.903018 | orchestrator | 2026-01-01 03:40:05 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:40:05.903102 | orchestrator | 2026-01-01 03:40:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:40:08.949001 | orchestrator | 2026-01-01 03:40:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:40:08.952727 | orchestrator | 2026-01-01 03:40:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:40:08.952805 | orchestrator | 2026-01-01 03:40:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:40:12.003850 | orchestrator | 2026-01-01 03:40:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:40:12.008414 | orchestrator | 2026-01-01 03:40:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:40:12.008470 | orchestrator | 2026-01-01 03:40:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:40:15.068362 | orchestrator | 2026-01-01 03:40:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:40:15.070593 | orchestrator | 2026-01-01 03:40:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:40:15.070663 | orchestrator | 2026-01-01 03:40:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:40:18.115452 | orchestrator | 2026-01-01 03:40:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:40:18.118759 | orchestrator | 2026-01-01 03:40:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:40:18.118807 | orchestrator | 2026-01-01 03:40:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:40:21.169660 | orchestrator | 2026-01-01 03:40:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:40:21.171845 | orchestrator | 2026-01-01 03:40:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:40:21.171958 | orchestrator | 2026-01-01 03:40:21 | INFO  | Wait 1 second(s) until the next check
2026-01-01 03:40:24.228893 | orchestrator | 2026-01-01 03:40:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 03:40:24.230571 | orchestrator | 2026-01-01 03:40:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 03:40:24.230605 | orchestrator | 2026-01-01 03:40:24 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycle repeated every ~3 seconds from 03:40:27 through 03:45:20: both tasks remained in state STARTED, followed each time by "Wait 1 second(s) until the next check" ...]
2026-01-01 03:45:23.434777 | orchestrator | 2026-01-01 03:45:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 03:45:23.435990 | orchestrator | 2026-01-01 03:45:23 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:23.436037 | orchestrator | 2026-01-01 03:45:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:26.487336 | orchestrator | 2026-01-01 03:45:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:26.488870 | orchestrator | 2026-01-01 03:45:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:26.488894 | orchestrator | 2026-01-01 03:45:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:29.552570 | orchestrator | 2026-01-01 03:45:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:29.554730 | orchestrator | 2026-01-01 03:45:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:29.554762 | orchestrator | 2026-01-01 03:45:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:32.601267 | orchestrator | 2026-01-01 03:45:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:32.603375 | orchestrator | 2026-01-01 03:45:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:32.603410 | orchestrator | 2026-01-01 03:45:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:35.657101 | orchestrator | 2026-01-01 03:45:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:35.658325 | orchestrator | 2026-01-01 03:45:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:35.658360 | orchestrator | 2026-01-01 03:45:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:38.708689 | orchestrator | 2026-01-01 03:45:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:38.709694 | orchestrator | 2026-01-01 03:45:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:45:38.709773 | orchestrator | 2026-01-01 03:45:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:41.760629 | orchestrator | 2026-01-01 03:45:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:41.764720 | orchestrator | 2026-01-01 03:45:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:41.765138 | orchestrator | 2026-01-01 03:45:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:44.822097 | orchestrator | 2026-01-01 03:45:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:44.823691 | orchestrator | 2026-01-01 03:45:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:44.823768 | orchestrator | 2026-01-01 03:45:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:47.871284 | orchestrator | 2026-01-01 03:45:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:47.872034 | orchestrator | 2026-01-01 03:45:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:47.872282 | orchestrator | 2026-01-01 03:45:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:50.927480 | orchestrator | 2026-01-01 03:45:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:50.928991 | orchestrator | 2026-01-01 03:45:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:50.929015 | orchestrator | 2026-01-01 03:45:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:53.977754 | orchestrator | 2026-01-01 03:45:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:53.978796 | orchestrator | 2026-01-01 03:45:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:53.978850 | orchestrator | 2026-01-01 03:45:53 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:45:57.023367 | orchestrator | 2026-01-01 03:45:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:45:57.025403 | orchestrator | 2026-01-01 03:45:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:45:57.025435 | orchestrator | 2026-01-01 03:45:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:00.075867 | orchestrator | 2026-01-01 03:46:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:00.076391 | orchestrator | 2026-01-01 03:46:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:00.076417 | orchestrator | 2026-01-01 03:46:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:03.122184 | orchestrator | 2026-01-01 03:46:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:03.124807 | orchestrator | 2026-01-01 03:46:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:03.124941 | orchestrator | 2026-01-01 03:46:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:06.180063 | orchestrator | 2026-01-01 03:46:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:06.181753 | orchestrator | 2026-01-01 03:46:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:06.181807 | orchestrator | 2026-01-01 03:46:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:09.231045 | orchestrator | 2026-01-01 03:46:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:09.231885 | orchestrator | 2026-01-01 03:46:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:09.232341 | orchestrator | 2026-01-01 03:46:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:12.286417 | orchestrator | 2026-01-01 
03:46:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:12.287975 | orchestrator | 2026-01-01 03:46:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:12.288014 | orchestrator | 2026-01-01 03:46:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:15.331085 | orchestrator | 2026-01-01 03:46:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:15.331801 | orchestrator | 2026-01-01 03:46:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:15.331832 | orchestrator | 2026-01-01 03:46:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:18.374980 | orchestrator | 2026-01-01 03:46:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:18.376925 | orchestrator | 2026-01-01 03:46:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:18.377030 | orchestrator | 2026-01-01 03:46:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:21.418917 | orchestrator | 2026-01-01 03:46:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:21.422974 | orchestrator | 2026-01-01 03:46:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:21.423043 | orchestrator | 2026-01-01 03:46:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:24.464268 | orchestrator | 2026-01-01 03:46:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:24.465717 | orchestrator | 2026-01-01 03:46:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:24.465766 | orchestrator | 2026-01-01 03:46:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:27.514090 | orchestrator | 2026-01-01 03:46:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:46:27.516585 | orchestrator | 2026-01-01 03:46:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:27.516608 | orchestrator | 2026-01-01 03:46:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:30.559094 | orchestrator | 2026-01-01 03:46:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:30.560684 | orchestrator | 2026-01-01 03:46:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:30.560716 | orchestrator | 2026-01-01 03:46:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:33.607866 | orchestrator | 2026-01-01 03:46:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:33.608104 | orchestrator | 2026-01-01 03:46:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:33.608144 | orchestrator | 2026-01-01 03:46:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:36.658120 | orchestrator | 2026-01-01 03:46:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:36.660192 | orchestrator | 2026-01-01 03:46:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:36.660212 | orchestrator | 2026-01-01 03:46:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:39.714913 | orchestrator | 2026-01-01 03:46:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:39.717020 | orchestrator | 2026-01-01 03:46:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:39.717051 | orchestrator | 2026-01-01 03:46:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:42.775491 | orchestrator | 2026-01-01 03:46:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:42.778182 | orchestrator | 2026-01-01 03:46:42 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:42.778281 | orchestrator | 2026-01-01 03:46:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:45.831154 | orchestrator | 2026-01-01 03:46:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:45.833433 | orchestrator | 2026-01-01 03:46:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:45.833549 | orchestrator | 2026-01-01 03:46:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:48.873637 | orchestrator | 2026-01-01 03:46:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:48.875773 | orchestrator | 2026-01-01 03:46:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:48.875824 | orchestrator | 2026-01-01 03:46:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:51.925473 | orchestrator | 2026-01-01 03:46:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:51.929640 | orchestrator | 2026-01-01 03:46:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:51.930140 | orchestrator | 2026-01-01 03:46:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:54.977729 | orchestrator | 2026-01-01 03:46:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:54.978845 | orchestrator | 2026-01-01 03:46:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:46:54.978878 | orchestrator | 2026-01-01 03:46:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:46:58.022980 | orchestrator | 2026-01-01 03:46:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:46:58.025082 | orchestrator | 2026-01-01 03:46:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:46:58.025728 | orchestrator | 2026-01-01 03:46:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:01.074191 | orchestrator | 2026-01-01 03:47:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:01.074388 | orchestrator | 2026-01-01 03:47:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:01.074616 | orchestrator | 2026-01-01 03:47:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:04.120545 | orchestrator | 2026-01-01 03:47:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:04.124596 | orchestrator | 2026-01-01 03:47:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:04.124648 | orchestrator | 2026-01-01 03:47:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:07.178733 | orchestrator | 2026-01-01 03:47:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:07.180663 | orchestrator | 2026-01-01 03:47:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:07.180680 | orchestrator | 2026-01-01 03:47:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:10.232381 | orchestrator | 2026-01-01 03:47:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:10.234992 | orchestrator | 2026-01-01 03:47:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:10.235111 | orchestrator | 2026-01-01 03:47:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:13.280802 | orchestrator | 2026-01-01 03:47:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:13.283486 | orchestrator | 2026-01-01 03:47:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:13.283556 | orchestrator | 2026-01-01 03:47:13 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:47:16.332663 | orchestrator | 2026-01-01 03:47:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:16.333868 | orchestrator | 2026-01-01 03:47:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:16.333904 | orchestrator | 2026-01-01 03:47:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:19.385301 | orchestrator | 2026-01-01 03:47:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:19.386975 | orchestrator | 2026-01-01 03:47:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:19.387018 | orchestrator | 2026-01-01 03:47:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:22.443106 | orchestrator | 2026-01-01 03:47:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:22.443338 | orchestrator | 2026-01-01 03:47:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:22.443364 | orchestrator | 2026-01-01 03:47:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:25.490600 | orchestrator | 2026-01-01 03:47:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:25.493465 | orchestrator | 2026-01-01 03:47:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:25.493500 | orchestrator | 2026-01-01 03:47:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:28.536195 | orchestrator | 2026-01-01 03:47:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:28.538459 | orchestrator | 2026-01-01 03:47:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:28.538496 | orchestrator | 2026-01-01 03:47:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:31.579201 | orchestrator | 2026-01-01 
03:47:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:31.581782 | orchestrator | 2026-01-01 03:47:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:31.581858 | orchestrator | 2026-01-01 03:47:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:34.642707 | orchestrator | 2026-01-01 03:47:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:34.645497 | orchestrator | 2026-01-01 03:47:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:34.645579 | orchestrator | 2026-01-01 03:47:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:37.692312 | orchestrator | 2026-01-01 03:47:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:37.694122 | orchestrator | 2026-01-01 03:47:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:37.694207 | orchestrator | 2026-01-01 03:47:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:40.745424 | orchestrator | 2026-01-01 03:47:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:40.747774 | orchestrator | 2026-01-01 03:47:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:40.748504 | orchestrator | 2026-01-01 03:47:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:43.800203 | orchestrator | 2026-01-01 03:47:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:43.802236 | orchestrator | 2026-01-01 03:47:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:43.802284 | orchestrator | 2026-01-01 03:47:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:46.850305 | orchestrator | 2026-01-01 03:47:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:47:46.851405 | orchestrator | 2026-01-01 03:47:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:46.851440 | orchestrator | 2026-01-01 03:47:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:49.895491 | orchestrator | 2026-01-01 03:47:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:49.896424 | orchestrator | 2026-01-01 03:47:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:49.896459 | orchestrator | 2026-01-01 03:47:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:52.945185 | orchestrator | 2026-01-01 03:47:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:52.948504 | orchestrator | 2026-01-01 03:47:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:52.948597 | orchestrator | 2026-01-01 03:47:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:55.994428 | orchestrator | 2026-01-01 03:47:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:55.996448 | orchestrator | 2026-01-01 03:47:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:55.996494 | orchestrator | 2026-01-01 03:47:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:47:59.041519 | orchestrator | 2026-01-01 03:47:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:47:59.044212 | orchestrator | 2026-01-01 03:47:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:47:59.044261 | orchestrator | 2026-01-01 03:47:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:02.081071 | orchestrator | 2026-01-01 03:48:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:02.083411 | orchestrator | 2026-01-01 03:48:02 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:02.083452 | orchestrator | 2026-01-01 03:48:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:05.128934 | orchestrator | 2026-01-01 03:48:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:05.131960 | orchestrator | 2026-01-01 03:48:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:05.131994 | orchestrator | 2026-01-01 03:48:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:08.185118 | orchestrator | 2026-01-01 03:48:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:08.186324 | orchestrator | 2026-01-01 03:48:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:08.186359 | orchestrator | 2026-01-01 03:48:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:11.233006 | orchestrator | 2026-01-01 03:48:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:11.236751 | orchestrator | 2026-01-01 03:48:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:11.236789 | orchestrator | 2026-01-01 03:48:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:14.279521 | orchestrator | 2026-01-01 03:48:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:14.280907 | orchestrator | 2026-01-01 03:48:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:14.280965 | orchestrator | 2026-01-01 03:48:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:17.328621 | orchestrator | 2026-01-01 03:48:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:17.330920 | orchestrator | 2026-01-01 03:48:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:48:17.330966 | orchestrator | 2026-01-01 03:48:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:20.380120 | orchestrator | 2026-01-01 03:48:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:20.382702 | orchestrator | 2026-01-01 03:48:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:20.382742 | orchestrator | 2026-01-01 03:48:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:23.436504 | orchestrator | 2026-01-01 03:48:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:23.441863 | orchestrator | 2026-01-01 03:48:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:23.442132 | orchestrator | 2026-01-01 03:48:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:26.493008 | orchestrator | 2026-01-01 03:48:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:26.495177 | orchestrator | 2026-01-01 03:48:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:26.495202 | orchestrator | 2026-01-01 03:48:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:29.553083 | orchestrator | 2026-01-01 03:48:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:29.555625 | orchestrator | 2026-01-01 03:48:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:29.555753 | orchestrator | 2026-01-01 03:48:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:32.599045 | orchestrator | 2026-01-01 03:48:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:32.600064 | orchestrator | 2026-01-01 03:48:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:32.600097 | orchestrator | 2026-01-01 03:48:32 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:48:35.647071 | orchestrator | 2026-01-01 03:48:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:35.647634 | orchestrator | 2026-01-01 03:48:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:35.647665 | orchestrator | 2026-01-01 03:48:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:38.692367 | orchestrator | 2026-01-01 03:48:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:38.694158 | orchestrator | 2026-01-01 03:48:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:38.694237 | orchestrator | 2026-01-01 03:48:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:41.744775 | orchestrator | 2026-01-01 03:48:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:41.746910 | orchestrator | 2026-01-01 03:48:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:41.747060 | orchestrator | 2026-01-01 03:48:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:44.798361 | orchestrator | 2026-01-01 03:48:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:44.801386 | orchestrator | 2026-01-01 03:48:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:44.801452 | orchestrator | 2026-01-01 03:48:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:47.850223 | orchestrator | 2026-01-01 03:48:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:47.851167 | orchestrator | 2026-01-01 03:48:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:47.851200 | orchestrator | 2026-01-01 03:48:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:50.890549 | orchestrator | 2026-01-01 
03:48:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:50.892896 | orchestrator | 2026-01-01 03:48:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:50.892961 | orchestrator | 2026-01-01 03:48:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:53.940432 | orchestrator | 2026-01-01 03:48:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:53.943518 | orchestrator | 2026-01-01 03:48:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:53.943543 | orchestrator | 2026-01-01 03:48:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:48:56.997958 | orchestrator | 2026-01-01 03:48:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:48:56.999584 | orchestrator | 2026-01-01 03:48:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:48:56.999679 | orchestrator | 2026-01-01 03:48:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:49:00.042718 | orchestrator | 2026-01-01 03:49:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:49:00.045813 | orchestrator | 2026-01-01 03:49:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:49:00.046231 | orchestrator | 2026-01-01 03:49:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:49:03.093080 | orchestrator | 2026-01-01 03:49:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:49:03.094469 | orchestrator | 2026-01-01 03:49:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:49:03.094506 | orchestrator | 2026-01-01 03:49:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:49:06.148010 | orchestrator | 2026-01-01 03:49:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED
2026-01-01 03:49:06.150084 | orchestrator | 2026-01-01 03:49:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 03:49:06.150136 | orchestrator | 2026-01-01 03:49:06 | INFO  | Wait 1 second(s) until the next check
2026-01-01 03:49:09.188821 | orchestrator | 2026-01-01 03:49:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 03:49:09.189524 | orchestrator | 2026-01-01 03:49:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 03:49:09.189558 | orchestrator | 2026-01-01 03:49:09 | INFO  | Wait 1 second(s) until the next check
2026-01-01 03:54:35.518806 | orchestrator | 2026-01-01 03:54:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 03:54:35.519952 | orchestrator | 2026-01-01 03:54:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 03:54:35.519999 | orchestrator | 2026-01-01 03:54:35 | INFO  | Wait 1 second(s) until the next check
2026-01-01 03:54:38.572176 | orchestrator | 2026-01-01 03:54:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 03:54:38.573918 | orchestrator | 2026-01-01 03:54:38 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:54:38.573998 | orchestrator | 2026-01-01 03:54:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:41.620049 | orchestrator | 2026-01-01 03:54:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:54:41.622380 | orchestrator | 2026-01-01 03:54:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:54:41.622436 | orchestrator | 2026-01-01 03:54:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:44.679507 | orchestrator | 2026-01-01 03:54:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:54:44.682003 | orchestrator | 2026-01-01 03:54:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:54:44.682146 | orchestrator | 2026-01-01 03:54:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:47.724408 | orchestrator | 2026-01-01 03:54:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:54:47.725010 | orchestrator | 2026-01-01 03:54:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:54:47.725039 | orchestrator | 2026-01-01 03:54:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:50.761549 | orchestrator | 2026-01-01 03:54:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:54:50.764670 | orchestrator | 2026-01-01 03:54:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:54:50.764815 | orchestrator | 2026-01-01 03:54:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:53.815641 | orchestrator | 2026-01-01 03:54:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:54:53.817314 | orchestrator | 2026-01-01 03:54:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:54:53.817353 | orchestrator | 2026-01-01 03:54:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:56.870417 | orchestrator | 2026-01-01 03:54:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:54:56.871647 | orchestrator | 2026-01-01 03:54:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:54:56.871702 | orchestrator | 2026-01-01 03:54:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:59.913852 | orchestrator | 2026-01-01 03:54:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:54:59.916123 | orchestrator | 2026-01-01 03:54:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:54:59.916511 | orchestrator | 2026-01-01 03:54:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:02.947747 | orchestrator | 2026-01-01 03:55:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:02.948680 | orchestrator | 2026-01-01 03:55:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:02.948764 | orchestrator | 2026-01-01 03:55:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:05.993658 | orchestrator | 2026-01-01 03:55:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:05.994555 | orchestrator | 2026-01-01 03:55:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:05.994591 | orchestrator | 2026-01-01 03:55:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:09.045518 | orchestrator | 2026-01-01 03:55:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:09.049245 | orchestrator | 2026-01-01 03:55:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:09.049935 | orchestrator | 2026-01-01 03:55:09 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:55:12.097507 | orchestrator | 2026-01-01 03:55:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:12.098494 | orchestrator | 2026-01-01 03:55:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:12.098525 | orchestrator | 2026-01-01 03:55:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:15.154150 | orchestrator | 2026-01-01 03:55:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:15.157383 | orchestrator | 2026-01-01 03:55:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:15.157528 | orchestrator | 2026-01-01 03:55:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:18.206069 | orchestrator | 2026-01-01 03:55:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:18.206187 | orchestrator | 2026-01-01 03:55:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:18.206204 | orchestrator | 2026-01-01 03:55:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:21.253465 | orchestrator | 2026-01-01 03:55:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:21.255684 | orchestrator | 2026-01-01 03:55:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:21.255722 | orchestrator | 2026-01-01 03:55:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:24.302360 | orchestrator | 2026-01-01 03:55:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:24.306105 | orchestrator | 2026-01-01 03:55:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:24.306150 | orchestrator | 2026-01-01 03:55:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:27.361370 | orchestrator | 2026-01-01 
03:55:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:27.364766 | orchestrator | 2026-01-01 03:55:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:27.364821 | orchestrator | 2026-01-01 03:55:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:30.418994 | orchestrator | 2026-01-01 03:55:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:30.422087 | orchestrator | 2026-01-01 03:55:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:30.422174 | orchestrator | 2026-01-01 03:55:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:33.469745 | orchestrator | 2026-01-01 03:55:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:33.471099 | orchestrator | 2026-01-01 03:55:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:33.471124 | orchestrator | 2026-01-01 03:55:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:36.517892 | orchestrator | 2026-01-01 03:55:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:36.522099 | orchestrator | 2026-01-01 03:55:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:36.522139 | orchestrator | 2026-01-01 03:55:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:39.577572 | orchestrator | 2026-01-01 03:55:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:39.580289 | orchestrator | 2026-01-01 03:55:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:39.580328 | orchestrator | 2026-01-01 03:55:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:42.629297 | orchestrator | 2026-01-01 03:55:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:55:42.632441 | orchestrator | 2026-01-01 03:55:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:42.632478 | orchestrator | 2026-01-01 03:55:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:45.680764 | orchestrator | 2026-01-01 03:55:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:45.684660 | orchestrator | 2026-01-01 03:55:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:45.684751 | orchestrator | 2026-01-01 03:55:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:48.733172 | orchestrator | 2026-01-01 03:55:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:48.736136 | orchestrator | 2026-01-01 03:55:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:48.736358 | orchestrator | 2026-01-01 03:55:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:51.790002 | orchestrator | 2026-01-01 03:55:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:51.791331 | orchestrator | 2026-01-01 03:55:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:51.791388 | orchestrator | 2026-01-01 03:55:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:54.836151 | orchestrator | 2026-01-01 03:55:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:54.839286 | orchestrator | 2026-01-01 03:55:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:54.839361 | orchestrator | 2026-01-01 03:55:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:55:57.883640 | orchestrator | 2026-01-01 03:55:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:55:57.886789 | orchestrator | 2026-01-01 03:55:57 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:55:57.886831 | orchestrator | 2026-01-01 03:55:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:00.930122 | orchestrator | 2026-01-01 03:56:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:00.933178 | orchestrator | 2026-01-01 03:56:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:00.933211 | orchestrator | 2026-01-01 03:56:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:03.970509 | orchestrator | 2026-01-01 03:56:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:03.972258 | orchestrator | 2026-01-01 03:56:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:03.972293 | orchestrator | 2026-01-01 03:56:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:07.017356 | orchestrator | 2026-01-01 03:56:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:07.018312 | orchestrator | 2026-01-01 03:56:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:07.018398 | orchestrator | 2026-01-01 03:56:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:10.072686 | orchestrator | 2026-01-01 03:56:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:10.075191 | orchestrator | 2026-01-01 03:56:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:10.075275 | orchestrator | 2026-01-01 03:56:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:13.121495 | orchestrator | 2026-01-01 03:56:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:13.124353 | orchestrator | 2026-01-01 03:56:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:56:13.124389 | orchestrator | 2026-01-01 03:56:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:16.182458 | orchestrator | 2026-01-01 03:56:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:16.184315 | orchestrator | 2026-01-01 03:56:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:16.184346 | orchestrator | 2026-01-01 03:56:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:19.237536 | orchestrator | 2026-01-01 03:56:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:19.240528 | orchestrator | 2026-01-01 03:56:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:19.240565 | orchestrator | 2026-01-01 03:56:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:22.289681 | orchestrator | 2026-01-01 03:56:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:22.292604 | orchestrator | 2026-01-01 03:56:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:22.292657 | orchestrator | 2026-01-01 03:56:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:25.344533 | orchestrator | 2026-01-01 03:56:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:25.345589 | orchestrator | 2026-01-01 03:56:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:25.345623 | orchestrator | 2026-01-01 03:56:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:28.392423 | orchestrator | 2026-01-01 03:56:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:28.394230 | orchestrator | 2026-01-01 03:56:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:28.394267 | orchestrator | 2026-01-01 03:56:28 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:56:31.441213 | orchestrator | 2026-01-01 03:56:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:31.443674 | orchestrator | 2026-01-01 03:56:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:31.443706 | orchestrator | 2026-01-01 03:56:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:34.490371 | orchestrator | 2026-01-01 03:56:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:34.491792 | orchestrator | 2026-01-01 03:56:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:34.491816 | orchestrator | 2026-01-01 03:56:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:37.535284 | orchestrator | 2026-01-01 03:56:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:37.537101 | orchestrator | 2026-01-01 03:56:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:37.537153 | orchestrator | 2026-01-01 03:56:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:40.570496 | orchestrator | 2026-01-01 03:56:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:40.571572 | orchestrator | 2026-01-01 03:56:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:40.571619 | orchestrator | 2026-01-01 03:56:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:43.616744 | orchestrator | 2026-01-01 03:56:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:43.619207 | orchestrator | 2026-01-01 03:56:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:43.619245 | orchestrator | 2026-01-01 03:56:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:46.664897 | orchestrator | 2026-01-01 
03:56:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:46.666523 | orchestrator | 2026-01-01 03:56:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:46.667013 | orchestrator | 2026-01-01 03:56:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:49.721321 | orchestrator | 2026-01-01 03:56:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:49.722477 | orchestrator | 2026-01-01 03:56:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:49.722522 | orchestrator | 2026-01-01 03:56:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:52.768333 | orchestrator | 2026-01-01 03:56:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:52.770646 | orchestrator | 2026-01-01 03:56:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:52.770927 | orchestrator | 2026-01-01 03:56:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:55.810304 | orchestrator | 2026-01-01 03:56:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:55.811478 | orchestrator | 2026-01-01 03:56:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:55.811561 | orchestrator | 2026-01-01 03:56:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:56:58.859932 | orchestrator | 2026-01-01 03:56:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:56:58.863232 | orchestrator | 2026-01-01 03:56:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:56:58.863270 | orchestrator | 2026-01-01 03:56:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:01.918585 | orchestrator | 2026-01-01 03:57:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:57:01.919340 | orchestrator | 2026-01-01 03:57:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:01.919376 | orchestrator | 2026-01-01 03:57:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:04.969787 | orchestrator | 2026-01-01 03:57:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:04.971654 | orchestrator | 2026-01-01 03:57:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:04.971727 | orchestrator | 2026-01-01 03:57:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:08.015214 | orchestrator | 2026-01-01 03:57:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:08.016519 | orchestrator | 2026-01-01 03:57:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:08.016548 | orchestrator | 2026-01-01 03:57:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:11.079611 | orchestrator | 2026-01-01 03:57:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:11.080699 | orchestrator | 2026-01-01 03:57:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:11.080731 | orchestrator | 2026-01-01 03:57:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:14.119343 | orchestrator | 2026-01-01 03:57:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:14.121617 | orchestrator | 2026-01-01 03:57:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:14.121656 | orchestrator | 2026-01-01 03:57:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:17.165185 | orchestrator | 2026-01-01 03:57:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:17.166517 | orchestrator | 2026-01-01 03:57:17 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:17.166558 | orchestrator | 2026-01-01 03:57:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:20.220129 | orchestrator | 2026-01-01 03:57:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:20.222015 | orchestrator | 2026-01-01 03:57:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:20.222104 | orchestrator | 2026-01-01 03:57:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:23.277226 | orchestrator | 2026-01-01 03:57:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:23.280499 | orchestrator | 2026-01-01 03:57:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:23.281074 | orchestrator | 2026-01-01 03:57:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:26.324401 | orchestrator | 2026-01-01 03:57:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:26.325861 | orchestrator | 2026-01-01 03:57:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:26.325978 | orchestrator | 2026-01-01 03:57:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:29.374663 | orchestrator | 2026-01-01 03:57:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:29.376535 | orchestrator | 2026-01-01 03:57:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:29.376646 | orchestrator | 2026-01-01 03:57:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:32.420756 | orchestrator | 2026-01-01 03:57:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:32.422510 | orchestrator | 2026-01-01 03:57:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
03:57:32.422605 | orchestrator | 2026-01-01 03:57:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:35.476144 | orchestrator | 2026-01-01 03:57:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:35.477956 | orchestrator | 2026-01-01 03:57:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:35.478173 | orchestrator | 2026-01-01 03:57:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:38.534150 | orchestrator | 2026-01-01 03:57:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:38.536058 | orchestrator | 2026-01-01 03:57:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:38.536140 | orchestrator | 2026-01-01 03:57:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:41.583046 | orchestrator | 2026-01-01 03:57:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:41.585284 | orchestrator | 2026-01-01 03:57:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:41.585320 | orchestrator | 2026-01-01 03:57:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:44.639572 | orchestrator | 2026-01-01 03:57:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:44.642113 | orchestrator | 2026-01-01 03:57:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:44.642231 | orchestrator | 2026-01-01 03:57:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:47.694455 | orchestrator | 2026-01-01 03:57:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:47.695773 | orchestrator | 2026-01-01 03:57:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:47.695855 | orchestrator | 2026-01-01 03:57:47 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:57:50.746929 | orchestrator | 2026-01-01 03:57:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:50.749342 | orchestrator | 2026-01-01 03:57:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:50.749367 | orchestrator | 2026-01-01 03:57:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:53.798723 | orchestrator | 2026-01-01 03:57:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:53.801881 | orchestrator | 2026-01-01 03:57:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:53.801948 | orchestrator | 2026-01-01 03:57:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:56.850527 | orchestrator | 2026-01-01 03:57:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:56.852912 | orchestrator | 2026-01-01 03:57:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:56.852946 | orchestrator | 2026-01-01 03:57:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:57:59.899801 | orchestrator | 2026-01-01 03:57:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:57:59.903254 | orchestrator | 2026-01-01 03:57:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:57:59.903342 | orchestrator | 2026-01-01 03:57:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:58:02.961877 | orchestrator | 2026-01-01 03:58:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:58:02.964316 | orchestrator | 2026-01-01 03:58:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:58:02.964353 | orchestrator | 2026-01-01 03:58:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:58:06.013682 | orchestrator | 2026-01-01 
03:58:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:58:06.015347 | orchestrator | 2026-01-01 03:58:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:58:06.015379 | orchestrator | 2026-01-01 03:58:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:58:09.065765 | orchestrator | 2026-01-01 03:58:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:58:09.068120 | orchestrator | 2026-01-01 03:58:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:58:09.068150 | orchestrator | 2026-01-01 03:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:58:12.107795 | orchestrator | 2026-01-01 03:58:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:58:12.109625 | orchestrator | 2026-01-01 03:58:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:58:12.109656 | orchestrator | 2026-01-01 03:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:58:15.158296 | orchestrator | 2026-01-01 03:58:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:58:15.159948 | orchestrator | 2026-01-01 03:58:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:58:15.159992 | orchestrator | 2026-01-01 03:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:58:18.208553 | orchestrator | 2026-01-01 03:58:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 03:58:18.211905 | orchestrator | 2026-01-01 03:58:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:58:18.211945 | orchestrator | 2026-01-01 03:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:58:21.260927 | orchestrator | 2026-01-01 03:58:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 03:58:21.261949 | orchestrator | 2026-01-01 03:58:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 03:58:21.261997 | orchestrator | 2026-01-01 03:58:21 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds: tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858 remained in state STARTED from 03:58:24 through 04:03:35 ...]
2026-01-01 04:03:38.417580 | orchestrator | 2026-01-01 04:03:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state
STARTED 2026-01-01 04:03:38.420162 | orchestrator | 2026-01-01 04:03:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:03:38.420387 | orchestrator | 2026-01-01 04:03:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:41.467847 | orchestrator | 2026-01-01 04:03:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:03:41.468873 | orchestrator | 2026-01-01 04:03:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:03:41.468891 | orchestrator | 2026-01-01 04:03:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:44.525540 | orchestrator | 2026-01-01 04:03:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:03:44.527485 | orchestrator | 2026-01-01 04:03:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:03:44.527526 | orchestrator | 2026-01-01 04:03:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:47.572343 | orchestrator | 2026-01-01 04:03:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:03:47.574341 | orchestrator | 2026-01-01 04:03:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:03:47.574491 | orchestrator | 2026-01-01 04:03:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:50.621466 | orchestrator | 2026-01-01 04:03:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:03:50.623143 | orchestrator | 2026-01-01 04:03:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:03:50.623198 | orchestrator | 2026-01-01 04:03:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:53.674765 | orchestrator | 2026-01-01 04:03:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:03:53.676096 | orchestrator | 2026-01-01 04:03:53 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:03:53.676179 | orchestrator | 2026-01-01 04:03:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:56.724510 | orchestrator | 2026-01-01 04:03:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:03:56.725563 | orchestrator | 2026-01-01 04:03:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:03:56.725870 | orchestrator | 2026-01-01 04:03:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:59.772289 | orchestrator | 2026-01-01 04:03:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:03:59.774336 | orchestrator | 2026-01-01 04:03:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:03:59.774457 | orchestrator | 2026-01-01 04:03:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:02.815529 | orchestrator | 2026-01-01 04:04:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:02.817292 | orchestrator | 2026-01-01 04:04:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:02.817385 | orchestrator | 2026-01-01 04:04:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:05.867302 | orchestrator | 2026-01-01 04:04:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:05.868244 | orchestrator | 2026-01-01 04:04:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:05.868280 | orchestrator | 2026-01-01 04:04:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:08.918872 | orchestrator | 2026-01-01 04:04:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:08.920938 | orchestrator | 2026-01-01 04:04:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
04:04:08.921023 | orchestrator | 2026-01-01 04:04:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:11.968051 | orchestrator | 2026-01-01 04:04:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:11.970424 | orchestrator | 2026-01-01 04:04:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:11.970454 | orchestrator | 2026-01-01 04:04:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:15.023495 | orchestrator | 2026-01-01 04:04:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:15.024174 | orchestrator | 2026-01-01 04:04:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:15.024233 | orchestrator | 2026-01-01 04:04:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:18.068648 | orchestrator | 2026-01-01 04:04:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:18.071137 | orchestrator | 2026-01-01 04:04:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:18.071181 | orchestrator | 2026-01-01 04:04:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:21.114628 | orchestrator | 2026-01-01 04:04:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:21.115948 | orchestrator | 2026-01-01 04:04:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:21.115983 | orchestrator | 2026-01-01 04:04:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:24.167917 | orchestrator | 2026-01-01 04:04:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:24.169808 | orchestrator | 2026-01-01 04:04:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:24.169840 | orchestrator | 2026-01-01 04:04:24 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:04:27.232298 | orchestrator | 2026-01-01 04:04:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:27.234825 | orchestrator | 2026-01-01 04:04:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:27.234863 | orchestrator | 2026-01-01 04:04:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:30.279173 | orchestrator | 2026-01-01 04:04:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:30.280909 | orchestrator | 2026-01-01 04:04:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:30.280944 | orchestrator | 2026-01-01 04:04:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:33.330096 | orchestrator | 2026-01-01 04:04:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:33.333208 | orchestrator | 2026-01-01 04:04:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:33.333379 | orchestrator | 2026-01-01 04:04:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:36.381732 | orchestrator | 2026-01-01 04:04:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:36.382891 | orchestrator | 2026-01-01 04:04:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:36.382953 | orchestrator | 2026-01-01 04:04:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:39.427609 | orchestrator | 2026-01-01 04:04:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:39.429682 | orchestrator | 2026-01-01 04:04:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:39.429721 | orchestrator | 2026-01-01 04:04:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:42.481095 | orchestrator | 2026-01-01 
04:04:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:42.482976 | orchestrator | 2026-01-01 04:04:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:42.483025 | orchestrator | 2026-01-01 04:04:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:45.538418 | orchestrator | 2026-01-01 04:04:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:45.538556 | orchestrator | 2026-01-01 04:04:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:45.538584 | orchestrator | 2026-01-01 04:04:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:48.587267 | orchestrator | 2026-01-01 04:04:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:48.588785 | orchestrator | 2026-01-01 04:04:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:48.588810 | orchestrator | 2026-01-01 04:04:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:51.642134 | orchestrator | 2026-01-01 04:04:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:51.642665 | orchestrator | 2026-01-01 04:04:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:51.642711 | orchestrator | 2026-01-01 04:04:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:54.690253 | orchestrator | 2026-01-01 04:04:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:04:54.691482 | orchestrator | 2026-01-01 04:04:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:54.691567 | orchestrator | 2026-01-01 04:04:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:57.749681 | orchestrator | 2026-01-01 04:04:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 04:04:57.751780 | orchestrator | 2026-01-01 04:04:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:04:57.751820 | orchestrator | 2026-01-01 04:04:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:00.803250 | orchestrator | 2026-01-01 04:05:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:00.805685 | orchestrator | 2026-01-01 04:05:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:00.805705 | orchestrator | 2026-01-01 04:05:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:03.853709 | orchestrator | 2026-01-01 04:05:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:03.855301 | orchestrator | 2026-01-01 04:05:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:03.855350 | orchestrator | 2026-01-01 04:05:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:06.913043 | orchestrator | 2026-01-01 04:05:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:06.914688 | orchestrator | 2026-01-01 04:05:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:06.914717 | orchestrator | 2026-01-01 04:05:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:09.959277 | orchestrator | 2026-01-01 04:05:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:09.960806 | orchestrator | 2026-01-01 04:05:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:09.960905 | orchestrator | 2026-01-01 04:05:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:13.006165 | orchestrator | 2026-01-01 04:05:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:13.007517 | orchestrator | 2026-01-01 04:05:13 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:13.007612 | orchestrator | 2026-01-01 04:05:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:16.048540 | orchestrator | 2026-01-01 04:05:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:16.049478 | orchestrator | 2026-01-01 04:05:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:16.049535 | orchestrator | 2026-01-01 04:05:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:19.099216 | orchestrator | 2026-01-01 04:05:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:19.100348 | orchestrator | 2026-01-01 04:05:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:19.100386 | orchestrator | 2026-01-01 04:05:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:22.141466 | orchestrator | 2026-01-01 04:05:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:22.142866 | orchestrator | 2026-01-01 04:05:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:22.142917 | orchestrator | 2026-01-01 04:05:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:25.191784 | orchestrator | 2026-01-01 04:05:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:25.193292 | orchestrator | 2026-01-01 04:05:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:25.193306 | orchestrator | 2026-01-01 04:05:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:28.242737 | orchestrator | 2026-01-01 04:05:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:28.245330 | orchestrator | 2026-01-01 04:05:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
04:05:28.245388 | orchestrator | 2026-01-01 04:05:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:31.287811 | orchestrator | 2026-01-01 04:05:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:31.289871 | orchestrator | 2026-01-01 04:05:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:31.289902 | orchestrator | 2026-01-01 04:05:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:34.343710 | orchestrator | 2026-01-01 04:05:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:34.346584 | orchestrator | 2026-01-01 04:05:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:34.346622 | orchestrator | 2026-01-01 04:05:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:37.395275 | orchestrator | 2026-01-01 04:05:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:37.397408 | orchestrator | 2026-01-01 04:05:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:37.397536 | orchestrator | 2026-01-01 04:05:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:40.447271 | orchestrator | 2026-01-01 04:05:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:40.448933 | orchestrator | 2026-01-01 04:05:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:40.448961 | orchestrator | 2026-01-01 04:05:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:43.504238 | orchestrator | 2026-01-01 04:05:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:43.505646 | orchestrator | 2026-01-01 04:05:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:43.505672 | orchestrator | 2026-01-01 04:05:43 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:05:46.544791 | orchestrator | 2026-01-01 04:05:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:46.545763 | orchestrator | 2026-01-01 04:05:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:46.545794 | orchestrator | 2026-01-01 04:05:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:49.596920 | orchestrator | 2026-01-01 04:05:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:49.598669 | orchestrator | 2026-01-01 04:05:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:49.598981 | orchestrator | 2026-01-01 04:05:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:52.644774 | orchestrator | 2026-01-01 04:05:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:52.646386 | orchestrator | 2026-01-01 04:05:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:52.646504 | orchestrator | 2026-01-01 04:05:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:55.701564 | orchestrator | 2026-01-01 04:05:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:55.701673 | orchestrator | 2026-01-01 04:05:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:55.701760 | orchestrator | 2026-01-01 04:05:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:05:58.746274 | orchestrator | 2026-01-01 04:05:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:05:58.747991 | orchestrator | 2026-01-01 04:05:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:05:58.748051 | orchestrator | 2026-01-01 04:05:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:01.797785 | orchestrator | 2026-01-01 
04:06:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:01.799369 | orchestrator | 2026-01-01 04:06:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:01.799558 | orchestrator | 2026-01-01 04:06:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:04.844670 | orchestrator | 2026-01-01 04:06:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:04.845844 | orchestrator | 2026-01-01 04:06:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:04.845874 | orchestrator | 2026-01-01 04:06:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:07.890248 | orchestrator | 2026-01-01 04:06:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:07.892432 | orchestrator | 2026-01-01 04:06:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:07.892557 | orchestrator | 2026-01-01 04:06:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:10.940205 | orchestrator | 2026-01-01 04:06:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:10.942294 | orchestrator | 2026-01-01 04:06:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:10.942400 | orchestrator | 2026-01-01 04:06:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:13.997340 | orchestrator | 2026-01-01 04:06:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:13.999153 | orchestrator | 2026-01-01 04:06:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:13.999195 | orchestrator | 2026-01-01 04:06:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:17.046812 | orchestrator | 2026-01-01 04:06:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 04:06:17.049017 | orchestrator | 2026-01-01 04:06:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:17.049052 | orchestrator | 2026-01-01 04:06:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:20.097023 | orchestrator | 2026-01-01 04:06:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:20.097827 | orchestrator | 2026-01-01 04:06:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:20.097858 | orchestrator | 2026-01-01 04:06:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:23.130827 | orchestrator | 2026-01-01 04:06:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:23.133513 | orchestrator | 2026-01-01 04:06:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:23.133649 | orchestrator | 2026-01-01 04:06:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:26.173338 | orchestrator | 2026-01-01 04:06:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:26.175305 | orchestrator | 2026-01-01 04:06:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:26.175351 | orchestrator | 2026-01-01 04:06:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:29.225253 | orchestrator | 2026-01-01 04:06:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:29.226341 | orchestrator | 2026-01-01 04:06:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:29.226374 | orchestrator | 2026-01-01 04:06:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:32.273068 | orchestrator | 2026-01-01 04:06:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:32.274093 | orchestrator | 2026-01-01 04:06:32 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:32.274125 | orchestrator | 2026-01-01 04:06:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:35.322268 | orchestrator | 2026-01-01 04:06:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:35.323617 | orchestrator | 2026-01-01 04:06:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:35.323652 | orchestrator | 2026-01-01 04:06:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:38.370950 | orchestrator | 2026-01-01 04:06:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:38.373188 | orchestrator | 2026-01-01 04:06:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:38.373240 | orchestrator | 2026-01-01 04:06:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:41.415638 | orchestrator | 2026-01-01 04:06:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:41.417520 | orchestrator | 2026-01-01 04:06:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:41.417549 | orchestrator | 2026-01-01 04:06:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:44.466202 | orchestrator | 2026-01-01 04:06:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:44.467712 | orchestrator | 2026-01-01 04:06:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:44.467741 | orchestrator | 2026-01-01 04:06:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:47.516939 | orchestrator | 2026-01-01 04:06:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:47.518137 | orchestrator | 2026-01-01 04:06:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
04:06:47.518213 | orchestrator | 2026-01-01 04:06:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:50.563793 | orchestrator | 2026-01-01 04:06:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:50.564457 | orchestrator | 2026-01-01 04:06:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:50.564606 | orchestrator | 2026-01-01 04:06:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:53.618508 | orchestrator | 2026-01-01 04:06:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:53.620331 | orchestrator | 2026-01-01 04:06:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:53.620652 | orchestrator | 2026-01-01 04:06:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:56.676208 | orchestrator | 2026-01-01 04:06:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:56.677955 | orchestrator | 2026-01-01 04:06:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:56.678091 | orchestrator | 2026-01-01 04:06:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:06:59.736094 | orchestrator | 2026-01-01 04:06:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:06:59.737842 | orchestrator | 2026-01-01 04:06:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:06:59.737871 | orchestrator | 2026-01-01 04:06:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:07:02.780468 | orchestrator | 2026-01-01 04:07:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:07:02.782318 | orchestrator | 2026-01-01 04:07:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:07:02.782387 | orchestrator | 2026-01-01 04:07:02 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:07:05.830289 | orchestrator | 2026-01-01 04:07:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:07:05.833361 | orchestrator | 2026-01-01 04:07:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:07:05.833413 | orchestrator | 2026-01-01 04:07:05 | INFO  | Wait 1 second(s) until the next check
[identical status checks for tasks e5169f14-1040-40f8-a263-f379e4d4ea2a and 3fcd444d-0d9d-4810-905b-dfca2755c858, repeated every ~3 seconds from 04:07:08 to 04:12:16, elided]
2026-01-01 04:12:19.941935 | orchestrator | 2026-01-01 04:12:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:19.944022 | orchestrator | 2026-01-01 04:12:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:19.944054 | orchestrator | 2026-01-01 04:12:19 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:12:22.994251 | orchestrator | 2026-01-01 04:12:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:22.996477 | orchestrator | 2026-01-01 04:12:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:22.996606 | orchestrator | 2026-01-01 04:12:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:26.050410 | orchestrator | 2026-01-01 04:12:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:26.052560 | orchestrator | 2026-01-01 04:12:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:26.052883 | orchestrator | 2026-01-01 04:12:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:29.101691 | orchestrator | 2026-01-01 04:12:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:29.103178 | orchestrator | 2026-01-01 04:12:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:29.103214 | orchestrator | 2026-01-01 04:12:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:32.141840 | orchestrator | 2026-01-01 04:12:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:32.145383 | orchestrator | 2026-01-01 04:12:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:32.146101 | orchestrator | 2026-01-01 04:12:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:35.182707 | orchestrator | 2026-01-01 04:12:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:35.184898 | orchestrator | 2026-01-01 04:12:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:35.184934 | orchestrator | 2026-01-01 04:12:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:38.230379 | orchestrator | 2026-01-01 
04:12:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:38.231570 | orchestrator | 2026-01-01 04:12:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:38.231602 | orchestrator | 2026-01-01 04:12:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:41.276863 | orchestrator | 2026-01-01 04:12:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:41.277636 | orchestrator | 2026-01-01 04:12:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:41.277670 | orchestrator | 2026-01-01 04:12:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:44.325980 | orchestrator | 2026-01-01 04:12:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:44.329678 | orchestrator | 2026-01-01 04:12:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:44.329759 | orchestrator | 2026-01-01 04:12:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:47.383905 | orchestrator | 2026-01-01 04:12:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:47.386242 | orchestrator | 2026-01-01 04:12:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:47.386391 | orchestrator | 2026-01-01 04:12:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:50.430571 | orchestrator | 2026-01-01 04:12:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:50.431533 | orchestrator | 2026-01-01 04:12:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:50.431576 | orchestrator | 2026-01-01 04:12:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:53.478409 | orchestrator | 2026-01-01 04:12:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 04:12:53.479703 | orchestrator | 2026-01-01 04:12:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:53.479763 | orchestrator | 2026-01-01 04:12:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:56.519563 | orchestrator | 2026-01-01 04:12:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:56.521448 | orchestrator | 2026-01-01 04:12:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:56.521490 | orchestrator | 2026-01-01 04:12:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:59.576518 | orchestrator | 2026-01-01 04:12:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:12:59.577985 | orchestrator | 2026-01-01 04:12:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:12:59.578066 | orchestrator | 2026-01-01 04:12:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:02.627981 | orchestrator | 2026-01-01 04:13:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:02.630569 | orchestrator | 2026-01-01 04:13:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:02.630606 | orchestrator | 2026-01-01 04:13:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:05.673830 | orchestrator | 2026-01-01 04:13:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:05.675598 | orchestrator | 2026-01-01 04:13:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:05.675666 | orchestrator | 2026-01-01 04:13:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:08.722744 | orchestrator | 2026-01-01 04:13:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:08.725768 | orchestrator | 2026-01-01 04:13:08 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:08.725843 | orchestrator | 2026-01-01 04:13:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:11.769522 | orchestrator | 2026-01-01 04:13:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:11.771145 | orchestrator | 2026-01-01 04:13:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:11.771187 | orchestrator | 2026-01-01 04:13:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:14.819428 | orchestrator | 2026-01-01 04:13:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:14.821292 | orchestrator | 2026-01-01 04:13:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:14.821328 | orchestrator | 2026-01-01 04:13:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:17.869863 | orchestrator | 2026-01-01 04:13:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:17.872861 | orchestrator | 2026-01-01 04:13:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:17.872932 | orchestrator | 2026-01-01 04:13:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:20.921046 | orchestrator | 2026-01-01 04:13:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:20.922200 | orchestrator | 2026-01-01 04:13:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:20.922358 | orchestrator | 2026-01-01 04:13:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:23.973122 | orchestrator | 2026-01-01 04:13:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:23.974889 | orchestrator | 2026-01-01 04:13:23 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
04:13:23.974925 | orchestrator | 2026-01-01 04:13:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:27.018664 | orchestrator | 2026-01-01 04:13:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:27.021711 | orchestrator | 2026-01-01 04:13:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:27.021811 | orchestrator | 2026-01-01 04:13:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:30.060067 | orchestrator | 2026-01-01 04:13:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:30.061439 | orchestrator | 2026-01-01 04:13:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:30.061473 | orchestrator | 2026-01-01 04:13:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:33.108373 | orchestrator | 2026-01-01 04:13:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:33.109873 | orchestrator | 2026-01-01 04:13:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:33.110715 | orchestrator | 2026-01-01 04:13:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:36.156795 | orchestrator | 2026-01-01 04:13:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:36.158342 | orchestrator | 2026-01-01 04:13:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:36.158447 | orchestrator | 2026-01-01 04:13:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:39.198887 | orchestrator | 2026-01-01 04:13:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:39.200523 | orchestrator | 2026-01-01 04:13:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:39.200574 | orchestrator | 2026-01-01 04:13:39 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:13:42.245839 | orchestrator | 2026-01-01 04:13:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:42.245972 | orchestrator | 2026-01-01 04:13:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:42.246001 | orchestrator | 2026-01-01 04:13:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:45.295521 | orchestrator | 2026-01-01 04:13:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:45.296431 | orchestrator | 2026-01-01 04:13:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:45.296454 | orchestrator | 2026-01-01 04:13:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:48.342587 | orchestrator | 2026-01-01 04:13:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:48.344634 | orchestrator | 2026-01-01 04:13:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:48.344733 | orchestrator | 2026-01-01 04:13:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:51.385509 | orchestrator | 2026-01-01 04:13:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:51.386880 | orchestrator | 2026-01-01 04:13:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:51.386963 | orchestrator | 2026-01-01 04:13:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:54.428818 | orchestrator | 2026-01-01 04:13:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:54.430711 | orchestrator | 2026-01-01 04:13:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:54.430803 | orchestrator | 2026-01-01 04:13:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:13:57.481728 | orchestrator | 2026-01-01 
04:13:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:13:57.483395 | orchestrator | 2026-01-01 04:13:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:13:57.483436 | orchestrator | 2026-01-01 04:13:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:00.530376 | orchestrator | 2026-01-01 04:14:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:00.533035 | orchestrator | 2026-01-01 04:14:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:00.533075 | orchestrator | 2026-01-01 04:14:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:03.576689 | orchestrator | 2026-01-01 04:14:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:03.577599 | orchestrator | 2026-01-01 04:14:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:03.577956 | orchestrator | 2026-01-01 04:14:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:06.621814 | orchestrator | 2026-01-01 04:14:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:06.623752 | orchestrator | 2026-01-01 04:14:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:06.624242 | orchestrator | 2026-01-01 04:14:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:09.663913 | orchestrator | 2026-01-01 04:14:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:09.667461 | orchestrator | 2026-01-01 04:14:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:09.667594 | orchestrator | 2026-01-01 04:14:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:12.715106 | orchestrator | 2026-01-01 04:14:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 04:14:12.717173 | orchestrator | 2026-01-01 04:14:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:12.717368 | orchestrator | 2026-01-01 04:14:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:15.771919 | orchestrator | 2026-01-01 04:14:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:15.776412 | orchestrator | 2026-01-01 04:14:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:15.776495 | orchestrator | 2026-01-01 04:14:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:18.825431 | orchestrator | 2026-01-01 04:14:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:18.827696 | orchestrator | 2026-01-01 04:14:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:18.827732 | orchestrator | 2026-01-01 04:14:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:21.876708 | orchestrator | 2026-01-01 04:14:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:21.878523 | orchestrator | 2026-01-01 04:14:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:21.878562 | orchestrator | 2026-01-01 04:14:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:24.923676 | orchestrator | 2026-01-01 04:14:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:24.925220 | orchestrator | 2026-01-01 04:14:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:24.925293 | orchestrator | 2026-01-01 04:14:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:27.976980 | orchestrator | 2026-01-01 04:14:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:27.977704 | orchestrator | 2026-01-01 04:14:27 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:27.977734 | orchestrator | 2026-01-01 04:14:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:31.032488 | orchestrator | 2026-01-01 04:14:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:31.033112 | orchestrator | 2026-01-01 04:14:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:31.033140 | orchestrator | 2026-01-01 04:14:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:34.082373 | orchestrator | 2026-01-01 04:14:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:34.084139 | orchestrator | 2026-01-01 04:14:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:34.084177 | orchestrator | 2026-01-01 04:14:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:37.122443 | orchestrator | 2026-01-01 04:14:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:37.123513 | orchestrator | 2026-01-01 04:14:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:37.123561 | orchestrator | 2026-01-01 04:14:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:40.165932 | orchestrator | 2026-01-01 04:14:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:40.167873 | orchestrator | 2026-01-01 04:14:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:40.167932 | orchestrator | 2026-01-01 04:14:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:43.217710 | orchestrator | 2026-01-01 04:14:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:43.220170 | orchestrator | 2026-01-01 04:14:43 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
04:14:43.220216 | orchestrator | 2026-01-01 04:14:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:46.263507 | orchestrator | 2026-01-01 04:14:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:46.266320 | orchestrator | 2026-01-01 04:14:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:46.266408 | orchestrator | 2026-01-01 04:14:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:49.316599 | orchestrator | 2026-01-01 04:14:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:49.319488 | orchestrator | 2026-01-01 04:14:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:49.319576 | orchestrator | 2026-01-01 04:14:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:52.370379 | orchestrator | 2026-01-01 04:14:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:52.373021 | orchestrator | 2026-01-01 04:14:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:52.373172 | orchestrator | 2026-01-01 04:14:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:55.426723 | orchestrator | 2026-01-01 04:14:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:55.429228 | orchestrator | 2026-01-01 04:14:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:55.429327 | orchestrator | 2026-01-01 04:14:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:14:58.482489 | orchestrator | 2026-01-01 04:14:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:14:58.482942 | orchestrator | 2026-01-01 04:14:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:14:58.483118 | orchestrator | 2026-01-01 04:14:58 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:15:01.524590 | orchestrator | 2026-01-01 04:15:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:01.526433 | orchestrator | 2026-01-01 04:15:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:01.526502 | orchestrator | 2026-01-01 04:15:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:04.578599 | orchestrator | 2026-01-01 04:15:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:04.583701 | orchestrator | 2026-01-01 04:15:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:04.583765 | orchestrator | 2026-01-01 04:15:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:07.624912 | orchestrator | 2026-01-01 04:15:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:07.626494 | orchestrator | 2026-01-01 04:15:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:07.626602 | orchestrator | 2026-01-01 04:15:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:10.677826 | orchestrator | 2026-01-01 04:15:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:10.679946 | orchestrator | 2026-01-01 04:15:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:10.679977 | orchestrator | 2026-01-01 04:15:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:13.728649 | orchestrator | 2026-01-01 04:15:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:13.731117 | orchestrator | 2026-01-01 04:15:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:13.731160 | orchestrator | 2026-01-01 04:15:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:16.775051 | orchestrator | 2026-01-01 
04:15:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:16.778396 | orchestrator | 2026-01-01 04:15:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:16.778487 | orchestrator | 2026-01-01 04:15:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:19.826855 | orchestrator | 2026-01-01 04:15:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:19.829880 | orchestrator | 2026-01-01 04:15:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:19.829939 | orchestrator | 2026-01-01 04:15:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:22.875232 | orchestrator | 2026-01-01 04:15:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:22.877378 | orchestrator | 2026-01-01 04:15:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:22.877981 | orchestrator | 2026-01-01 04:15:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:25.923783 | orchestrator | 2026-01-01 04:15:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:25.924495 | orchestrator | 2026-01-01 04:15:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:25.924530 | orchestrator | 2026-01-01 04:15:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:28.974910 | orchestrator | 2026-01-01 04:15:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:28.976669 | orchestrator | 2026-01-01 04:15:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:28.976765 | orchestrator | 2026-01-01 04:15:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:32.025514 | orchestrator | 2026-01-01 04:15:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 04:15:32.026465 | orchestrator | 2026-01-01 04:15:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:32.026485 | orchestrator | 2026-01-01 04:15:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:35.078609 | orchestrator | 2026-01-01 04:15:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:35.080969 | orchestrator | 2026-01-01 04:15:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:35.080998 | orchestrator | 2026-01-01 04:15:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:38.114888 | orchestrator | 2026-01-01 04:15:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:38.117153 | orchestrator | 2026-01-01 04:15:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:38.117543 | orchestrator | 2026-01-01 04:15:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:41.164564 | orchestrator | 2026-01-01 04:15:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:41.166014 | orchestrator | 2026-01-01 04:15:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:41.166138 | orchestrator | 2026-01-01 04:15:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:44.212770 | orchestrator | 2026-01-01 04:15:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:44.216220 | orchestrator | 2026-01-01 04:15:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:44.216319 | orchestrator | 2026-01-01 04:15:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:47.269629 | orchestrator | 2026-01-01 04:15:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:47.272019 | orchestrator | 2026-01-01 04:15:47 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:47.272084 | orchestrator | 2026-01-01 04:15:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:50.324196 | orchestrator | 2026-01-01 04:15:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:50.326726 | orchestrator | 2026-01-01 04:15:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:50.326772 | orchestrator | 2026-01-01 04:15:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:53.377120 | orchestrator | 2026-01-01 04:15:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:53.379084 | orchestrator | 2026-01-01 04:15:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:53.379136 | orchestrator | 2026-01-01 04:15:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:56.425101 | orchestrator | 2026-01-01 04:15:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:56.425939 | orchestrator | 2026-01-01 04:15:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:56.425972 | orchestrator | 2026-01-01 04:15:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:15:59.478630 | orchestrator | 2026-01-01 04:15:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:15:59.480187 | orchestrator | 2026-01-01 04:15:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:15:59.480294 | orchestrator | 2026-01-01 04:15:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:16:02.524832 | orchestrator | 2026-01-01 04:16:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:16:02.528401 | orchestrator | 2026-01-01 04:16:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
04:16:02.528461 | orchestrator | 2026-01-01 04:16:02 | INFO  | Wait 1 second(s) until the next check
2026-01-01 04:16:05.582887 | orchestrator | 2026-01-01 04:16:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 04:16:05.583818 | orchestrator | 2026-01-01 04:16:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 04:16:05.583950 | orchestrator | 2026-01-01 04:16:05 | INFO  | Wait 1 second(s) until the next check
2026-01-01 04:21:34.999838 | orchestrator | 2026-01-01 04:21:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 04:21:35.001105 | orchestrator | 2026-01-01 04:21:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 04:21:35.001199 | orchestrator | 2026-01-01 04:21:34 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 04:21:38.044080 | orchestrator | 2026-01-01 04:21:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:21:38.046420 | orchestrator | 2026-01-01 04:21:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:21:38.046455 | orchestrator | 2026-01-01 04:21:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:41.087437 | orchestrator | 2026-01-01 04:21:41 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:21:41.088161 | orchestrator | 2026-01-01 04:21:41 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:21:41.088198 | orchestrator | 2026-01-01 04:21:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:44.130000 | orchestrator | 2026-01-01 04:21:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:21:44.132241 | orchestrator | 2026-01-01 04:21:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:21:44.132331 | orchestrator | 2026-01-01 04:21:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:47.180641 | orchestrator | 2026-01-01 04:21:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:21:47.181032 | orchestrator | 2026-01-01 04:21:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:21:47.181367 | orchestrator | 2026-01-01 04:21:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:50.228039 | orchestrator | 2026-01-01 04:21:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:21:50.229456 | orchestrator | 2026-01-01 04:21:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:21:50.229553 | orchestrator | 2026-01-01 04:21:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:53.272666 | orchestrator | 2026-01-01 
04:21:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:21:53.275008 | orchestrator | 2026-01-01 04:21:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:21:53.275038 | orchestrator | 2026-01-01 04:21:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:56.319378 | orchestrator | 2026-01-01 04:21:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:21:56.322211 | orchestrator | 2026-01-01 04:21:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:21:56.322249 | orchestrator | 2026-01-01 04:21:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:59.368058 | orchestrator | 2026-01-01 04:21:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:21:59.371561 | orchestrator | 2026-01-01 04:21:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:21:59.371600 | orchestrator | 2026-01-01 04:21:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:02.419075 | orchestrator | 2026-01-01 04:22:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:02.420663 | orchestrator | 2026-01-01 04:22:02 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:02.420702 | orchestrator | 2026-01-01 04:22:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:05.464560 | orchestrator | 2026-01-01 04:22:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:05.464898 | orchestrator | 2026-01-01 04:22:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:05.464923 | orchestrator | 2026-01-01 04:22:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:08.509454 | orchestrator | 2026-01-01 04:22:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 04:22:08.511984 | orchestrator | 2026-01-01 04:22:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:08.512048 | orchestrator | 2026-01-01 04:22:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:11.558873 | orchestrator | 2026-01-01 04:22:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:11.561983 | orchestrator | 2026-01-01 04:22:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:11.562596 | orchestrator | 2026-01-01 04:22:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:14.606146 | orchestrator | 2026-01-01 04:22:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:14.607790 | orchestrator | 2026-01-01 04:22:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:14.607837 | orchestrator | 2026-01-01 04:22:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:17.660218 | orchestrator | 2026-01-01 04:22:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:17.661896 | orchestrator | 2026-01-01 04:22:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:17.661933 | orchestrator | 2026-01-01 04:22:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:20.705585 | orchestrator | 2026-01-01 04:22:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:20.706258 | orchestrator | 2026-01-01 04:22:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:20.706322 | orchestrator | 2026-01-01 04:22:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:23.749553 | orchestrator | 2026-01-01 04:22:23 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:23.750872 | orchestrator | 2026-01-01 04:22:23 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:23.750908 | orchestrator | 2026-01-01 04:22:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:26.797832 | orchestrator | 2026-01-01 04:22:26 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:26.799767 | orchestrator | 2026-01-01 04:22:26 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:26.799932 | orchestrator | 2026-01-01 04:22:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:29.848774 | orchestrator | 2026-01-01 04:22:29 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:29.851737 | orchestrator | 2026-01-01 04:22:29 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:29.851833 | orchestrator | 2026-01-01 04:22:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:32.892969 | orchestrator | 2026-01-01 04:22:32 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:32.894196 | orchestrator | 2026-01-01 04:22:32 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:32.894246 | orchestrator | 2026-01-01 04:22:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:35.938582 | orchestrator | 2026-01-01 04:22:35 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:35.940051 | orchestrator | 2026-01-01 04:22:35 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:35.940088 | orchestrator | 2026-01-01 04:22:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:38.990847 | orchestrator | 2026-01-01 04:22:38 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:38.992638 | orchestrator | 2026-01-01 04:22:38 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
04:22:38.992673 | orchestrator | 2026-01-01 04:22:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:42.052474 | orchestrator | 2026-01-01 04:22:42 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:42.054399 | orchestrator | 2026-01-01 04:22:42 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:42.054466 | orchestrator | 2026-01-01 04:22:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:45.098107 | orchestrator | 2026-01-01 04:22:45 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:45.099279 | orchestrator | 2026-01-01 04:22:45 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:45.099360 | orchestrator | 2026-01-01 04:22:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:48.145811 | orchestrator | 2026-01-01 04:22:48 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:48.146660 | orchestrator | 2026-01-01 04:22:48 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:48.146719 | orchestrator | 2026-01-01 04:22:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:51.199738 | orchestrator | 2026-01-01 04:22:51 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:51.201657 | orchestrator | 2026-01-01 04:22:51 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:51.201711 | orchestrator | 2026-01-01 04:22:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:54.260573 | orchestrator | 2026-01-01 04:22:54 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:54.264274 | orchestrator | 2026-01-01 04:22:54 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:54.264354 | orchestrator | 2026-01-01 04:22:54 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:22:57.310934 | orchestrator | 2026-01-01 04:22:57 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:22:57.312193 | orchestrator | 2026-01-01 04:22:57 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:22:57.312268 | orchestrator | 2026-01-01 04:22:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:00.359066 | orchestrator | 2026-01-01 04:23:00 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:00.361365 | orchestrator | 2026-01-01 04:23:00 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:00.361400 | orchestrator | 2026-01-01 04:23:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:03.409395 | orchestrator | 2026-01-01 04:23:03 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:03.412056 | orchestrator | 2026-01-01 04:23:03 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:03.412091 | orchestrator | 2026-01-01 04:23:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:06.458579 | orchestrator | 2026-01-01 04:23:06 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:06.460073 | orchestrator | 2026-01-01 04:23:06 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:06.460116 | orchestrator | 2026-01-01 04:23:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:09.508862 | orchestrator | 2026-01-01 04:23:09 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:09.512818 | orchestrator | 2026-01-01 04:23:09 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:09.512904 | orchestrator | 2026-01-01 04:23:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:12.560709 | orchestrator | 2026-01-01 
04:23:12 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:12.566122 | orchestrator | 2026-01-01 04:23:12 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:12.566222 | orchestrator | 2026-01-01 04:23:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:15.602994 | orchestrator | 2026-01-01 04:23:15 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:15.605590 | orchestrator | 2026-01-01 04:23:15 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:15.605642 | orchestrator | 2026-01-01 04:23:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:18.664391 | orchestrator | 2026-01-01 04:23:18 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:18.666089 | orchestrator | 2026-01-01 04:23:18 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:18.666115 | orchestrator | 2026-01-01 04:23:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:21.706868 | orchestrator | 2026-01-01 04:23:21 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:21.708481 | orchestrator | 2026-01-01 04:23:21 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:21.708516 | orchestrator | 2026-01-01 04:23:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:24.743252 | orchestrator | 2026-01-01 04:23:24 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:24.744109 | orchestrator | 2026-01-01 04:23:24 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:24.744144 | orchestrator | 2026-01-01 04:23:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:27.784761 | orchestrator | 2026-01-01 04:23:27 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 04:23:27.785578 | orchestrator | 2026-01-01 04:23:27 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:27.785613 | orchestrator | 2026-01-01 04:23:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:30.840365 | orchestrator | 2026-01-01 04:23:30 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:30.842828 | orchestrator | 2026-01-01 04:23:30 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:30.842882 | orchestrator | 2026-01-01 04:23:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:33.902538 | orchestrator | 2026-01-01 04:23:33 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:33.905000 | orchestrator | 2026-01-01 04:23:33 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:33.905046 | orchestrator | 2026-01-01 04:23:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:36.946373 | orchestrator | 2026-01-01 04:23:36 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:36.947951 | orchestrator | 2026-01-01 04:23:36 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:36.947980 | orchestrator | 2026-01-01 04:23:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:39.983919 | orchestrator | 2026-01-01 04:23:39 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:39.985457 | orchestrator | 2026-01-01 04:23:39 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:39.985493 | orchestrator | 2026-01-01 04:23:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:43.031046 | orchestrator | 2026-01-01 04:23:43 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:43.031891 | orchestrator | 2026-01-01 04:23:43 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:43.031928 | orchestrator | 2026-01-01 04:23:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:46.073818 | orchestrator | 2026-01-01 04:23:46 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:46.075539 | orchestrator | 2026-01-01 04:23:46 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:46.075576 | orchestrator | 2026-01-01 04:23:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:49.119345 | orchestrator | 2026-01-01 04:23:49 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:49.120416 | orchestrator | 2026-01-01 04:23:49 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:49.120467 | orchestrator | 2026-01-01 04:23:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:52.164052 | orchestrator | 2026-01-01 04:23:52 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:52.165180 | orchestrator | 2026-01-01 04:23:52 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:52.165211 | orchestrator | 2026-01-01 04:23:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:55.206881 | orchestrator | 2026-01-01 04:23:55 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:55.208146 | orchestrator | 2026-01-01 04:23:55 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:23:55.208178 | orchestrator | 2026-01-01 04:23:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:23:58.258478 | orchestrator | 2026-01-01 04:23:58 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:23:58.261229 | orchestrator | 2026-01-01 04:23:58 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
04:23:58.261276 | orchestrator | 2026-01-01 04:23:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:01.308038 | orchestrator | 2026-01-01 04:24:01 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:01.310235 | orchestrator | 2026-01-01 04:24:01 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:01.310274 | orchestrator | 2026-01-01 04:24:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:04.360572 | orchestrator | 2026-01-01 04:24:04 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:04.363423 | orchestrator | 2026-01-01 04:24:04 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:04.363484 | orchestrator | 2026-01-01 04:24:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:07.404483 | orchestrator | 2026-01-01 04:24:07 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:07.406911 | orchestrator | 2026-01-01 04:24:07 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:07.406932 | orchestrator | 2026-01-01 04:24:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:10.455062 | orchestrator | 2026-01-01 04:24:10 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:10.456121 | orchestrator | 2026-01-01 04:24:10 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:10.456180 | orchestrator | 2026-01-01 04:24:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:13.499850 | orchestrator | 2026-01-01 04:24:13 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:13.500254 | orchestrator | 2026-01-01 04:24:13 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:13.500458 | orchestrator | 2026-01-01 04:24:13 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:24:16.548803 | orchestrator | 2026-01-01 04:24:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:16.551706 | orchestrator | 2026-01-01 04:24:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:16.551790 | orchestrator | 2026-01-01 04:24:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:19.603528 | orchestrator | 2026-01-01 04:24:19 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:19.605498 | orchestrator | 2026-01-01 04:24:19 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:19.605535 | orchestrator | 2026-01-01 04:24:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:22.648633 | orchestrator | 2026-01-01 04:24:22 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:22.650530 | orchestrator | 2026-01-01 04:24:22 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:22.650585 | orchestrator | 2026-01-01 04:24:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:25.703391 | orchestrator | 2026-01-01 04:24:25 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:25.704632 | orchestrator | 2026-01-01 04:24:25 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:25.704663 | orchestrator | 2026-01-01 04:24:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:28.754164 | orchestrator | 2026-01-01 04:24:28 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:28.755722 | orchestrator | 2026-01-01 04:24:28 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:28.755760 | orchestrator | 2026-01-01 04:24:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:31.799550 | orchestrator | 2026-01-01 
04:24:31 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:31.800728 | orchestrator | 2026-01-01 04:24:31 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:31.800754 | orchestrator | 2026-01-01 04:24:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:34.852396 | orchestrator | 2026-01-01 04:24:34 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:34.854925 | orchestrator | 2026-01-01 04:24:34 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:34.854970 | orchestrator | 2026-01-01 04:24:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:37.905977 | orchestrator | 2026-01-01 04:24:37 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:37.907379 | orchestrator | 2026-01-01 04:24:37 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:37.908549 | orchestrator | 2026-01-01 04:24:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:40.953882 | orchestrator | 2026-01-01 04:24:40 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:40.954997 | orchestrator | 2026-01-01 04:24:40 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:40.955047 | orchestrator | 2026-01-01 04:24:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:44.005346 | orchestrator | 2026-01-01 04:24:44 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:44.006770 | orchestrator | 2026-01-01 04:24:44 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:44.006798 | orchestrator | 2026-01-01 04:24:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:47.051513 | orchestrator | 2026-01-01 04:24:47 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state 
STARTED 2026-01-01 04:24:47.052191 | orchestrator | 2026-01-01 04:24:47 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:47.052224 | orchestrator | 2026-01-01 04:24:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:50.104426 | orchestrator | 2026-01-01 04:24:50 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:50.105471 | orchestrator | 2026-01-01 04:24:50 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:50.105498 | orchestrator | 2026-01-01 04:24:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:53.141441 | orchestrator | 2026-01-01 04:24:53 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:53.143299 | orchestrator | 2026-01-01 04:24:53 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:53.143361 | orchestrator | 2026-01-01 04:24:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:56.192075 | orchestrator | 2026-01-01 04:24:56 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:56.193586 | orchestrator | 2026-01-01 04:24:56 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:56.193632 | orchestrator | 2026-01-01 04:24:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:24:59.234495 | orchestrator | 2026-01-01 04:24:59 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:24:59.235395 | orchestrator | 2026-01-01 04:24:59 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:24:59.235451 | orchestrator | 2026-01-01 04:24:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:25:02.287762 | orchestrator | 2026-01-01 04:25:02 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:25:02.289844 | orchestrator | 2026-01-01 04:25:02 | INFO  
| Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:25:02.289865 | orchestrator | 2026-01-01 04:25:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:25:05.345407 | orchestrator | 2026-01-01 04:25:05 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:25:05.347012 | orchestrator | 2026-01-01 04:25:05 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:25:05.347147 | orchestrator | 2026-01-01 04:25:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:25:08.390109 | orchestrator | 2026-01-01 04:25:08 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:25:08.391923 | orchestrator | 2026-01-01 04:25:08 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:25:08.392011 | orchestrator | 2026-01-01 04:25:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:25:11.440226 | orchestrator | 2026-01-01 04:25:11 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:25:11.441660 | orchestrator | 2026-01-01 04:25:11 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:25:11.441694 | orchestrator | 2026-01-01 04:25:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:25:14.489735 | orchestrator | 2026-01-01 04:25:14 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:25:14.493027 | orchestrator | 2026-01-01 04:25:14 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 04:25:14.493071 | orchestrator | 2026-01-01 04:25:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:25:17.539883 | orchestrator | 2026-01-01 04:25:17 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED 2026-01-01 04:25:17.541438 | orchestrator | 2026-01-01 04:25:17 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED 2026-01-01 
04:25:17.541537 | orchestrator | 2026-01-01 04:25:17 | INFO  | Wait 1 second(s) until the next check
2026-01-01 04:25:20.593129 | orchestrator | 2026-01-01 04:25:20 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 04:25:20.593937 | orchestrator | 2026-01-01 04:25:20 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 04:25:20.593969 | orchestrator | 2026-01-01 04:25:20 | INFO  | Wait 1 second(s) until the next check
[... the same three-line polling cycle (both tasks reported in state STARTED, then "Wait 1 second(s) until the next check") repeats about every 3 seconds from 04:25:23 through 04:30:16 with no state change ...]
2026-01-01 04:30:16.456991 | orchestrator | 2026-01-01 04:30:16 | INFO  | Task e5169f14-1040-40f8-a263-f379e4d4ea2a is in state STARTED
2026-01-01 04:30:16.458810 | orchestrator | 2026-01-01 04:30:16 | INFO  | Task 3fcd444d-0d9d-4810-905b-dfca2755c858 is in state STARTED
2026-01-01 04:30:16.458873 | orchestrator | 2026-01-01 04:30:16 | INFO  | Wait 1 second(s) until the next check
2026-01-01 04:30:19.766189 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-01 04:30:19.768645 | POST-RUN START: [untrusted :
github.com/osism/testbed/playbooks/post.yml@main] 2026-01-01 04:30:20.547807 | 2026-01-01 04:30:20.547977 | PLAY [Post output play] 2026-01-01 04:30:20.575611 | 2026-01-01 04:30:20.575837 | LOOP [stage-output : Register sources] 2026-01-01 04:30:20.639489 | 2026-01-01 04:30:20.639719 | TASK [stage-output : Check sudo] 2026-01-01 04:30:21.588092 | orchestrator | sudo: a password is required 2026-01-01 04:30:21.679379 | orchestrator | ok: Runtime: 0:00:00.014903 2026-01-01 04:30:21.693047 | 2026-01-01 04:30:21.693312 | LOOP [stage-output : Set source and destination for files and folders] 2026-01-01 04:30:21.734495 | 2026-01-01 04:30:21.734899 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-01-01 04:30:21.814398 | orchestrator | ok 2026-01-01 04:30:21.824360 | 2026-01-01 04:30:21.824524 | LOOP [stage-output : Ensure target folders exist] 2026-01-01 04:30:22.281295 | orchestrator | ok: "docs" 2026-01-01 04:30:22.282215 | 2026-01-01 04:30:22.532095 | orchestrator | ok: "artifacts" 2026-01-01 04:30:22.780169 | orchestrator | ok: "logs" 2026-01-01 04:30:22.801116 | 2026-01-01 04:30:22.801309 | LOOP [stage-output : Copy files and folders to staging folder] 2026-01-01 04:30:22.836658 | 2026-01-01 04:30:22.836936 | TASK [stage-output : Make all log files readable] 2026-01-01 04:30:23.135619 | orchestrator | ok 2026-01-01 04:30:23.142495 | 2026-01-01 04:30:23.142620 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-01-01 04:30:23.178210 | orchestrator | skipping: Conditional result was False 2026-01-01 04:30:23.195002 | 2026-01-01 04:30:23.195190 | TASK [stage-output : Discover log files for compression] 2026-01-01 04:30:23.220306 | orchestrator | skipping: Conditional result was False 2026-01-01 04:30:23.231972 | 2026-01-01 04:30:23.232109 | LOOP [stage-output : Archive everything from logs] 2026-01-01 04:30:23.283746 | 2026-01-01 04:30:23.283916 | PLAY [Post cleanup play] 2026-01-01 04:30:23.292626 | 2026-01-01 
04:30:23.292769 | TASK [Set cloud fact (Zuul deployment)] 2026-01-01 04:30:23.358558 | orchestrator | ok 2026-01-01 04:30:23.371608 | 2026-01-01 04:30:23.371751 | TASK [Set cloud fact (local deployment)] 2026-01-01 04:30:23.406679 | orchestrator | skipping: Conditional result was False 2026-01-01 04:30:23.421541 | 2026-01-01 04:30:23.421703 | TASK [Clean the cloud environment] 2026-01-01 04:30:24.834233 | orchestrator | 2026-01-01 04:30:24 - clean up servers 2026-01-01 04:30:25.793041 | orchestrator | 2026-01-01 04:30:25 - testbed-manager 2026-01-01 04:30:25.879918 | orchestrator | 2026-01-01 04:30:25 - testbed-node-2 2026-01-01 04:30:25.983788 | orchestrator | 2026-01-01 04:30:25 - testbed-node-5 2026-01-01 04:30:26.078990 | orchestrator | 2026-01-01 04:30:26 - testbed-node-0 2026-01-01 04:30:26.181445 | orchestrator | 2026-01-01 04:30:26 - testbed-node-4 2026-01-01 04:30:26.294422 | orchestrator | 2026-01-01 04:30:26 - testbed-node-1 2026-01-01 04:30:26.388840 | orchestrator | 2026-01-01 04:30:26 - testbed-node-3 2026-01-01 04:30:26.478879 | orchestrator | 2026-01-01 04:30:26 - clean up keypairs 2026-01-01 04:30:26.500872 | orchestrator | 2026-01-01 04:30:26 - testbed 2026-01-01 04:30:26.525909 | orchestrator | 2026-01-01 04:30:26 - wait for servers to be gone 2026-01-01 04:30:37.430850 | orchestrator | 2026-01-01 04:30:37 - clean up ports 2026-01-01 04:30:37.617822 | orchestrator | 2026-01-01 04:30:37 - 398689b0-2158-444d-a994-3479913ce599 2026-01-01 04:30:37.900922 | orchestrator | 2026-01-01 04:30:37 - 72c122b5-953c-42ed-9b3d-ef3284107294 2026-01-01 04:30:38.409028 | orchestrator | 2026-01-01 04:30:38 - 86a99773-648c-42e1-90ac-f0c7cca2b280 2026-01-01 04:30:38.639977 | orchestrator | 2026-01-01 04:30:38 - 8d20d08f-7961-4e1d-9b2c-0807676c1973 2026-01-01 04:30:38.852681 | orchestrator | 2026-01-01 04:30:38 - 9894bcfe-70e6-4d02-985a-3eaf0a5a9fb1 2026-01-01 04:30:39.074370 | orchestrator | 2026-01-01 04:30:39 - ea3a7a39-79fe-4a0b-89d3-34fa43a0bf1b 2026-01-01 
04:30:39.281867 | orchestrator | 2026-01-01 04:30:39 - fb11ad80-3940-4f2a-86c9-8dfb52725a52 2026-01-01 04:30:39.499801 | orchestrator | 2026-01-01 04:30:39 - clean up volumes 2026-01-01 04:30:39.605763 | orchestrator | 2026-01-01 04:30:39 - testbed-volume-3-node-base 2026-01-01 04:30:39.646583 | orchestrator | 2026-01-01 04:30:39 - testbed-volume-5-node-base 2026-01-01 04:30:39.688535 | orchestrator | 2026-01-01 04:30:39 - testbed-volume-0-node-base 2026-01-01 04:30:39.733366 | orchestrator | 2026-01-01 04:30:39 - testbed-volume-4-node-base 2026-01-01 04:30:39.778499 | orchestrator | 2026-01-01 04:30:39 - testbed-volume-1-node-base 2026-01-01 04:30:39.828700 | orchestrator | 2026-01-01 04:30:39 - testbed-volume-2-node-base 2026-01-01 04:30:39.874548 | orchestrator | 2026-01-01 04:30:39 - testbed-volume-manager-base 2026-01-01 04:30:39.918673 | orchestrator | 2026-01-01 04:30:39 - testbed-volume-7-node-4 2026-01-01 04:30:39.963207 | orchestrator | 2026-01-01 04:30:39 - testbed-volume-2-node-5 2026-01-01 04:30:40.007283 | orchestrator | 2026-01-01 04:30:40 - testbed-volume-5-node-5 2026-01-01 04:30:40.052219 | orchestrator | 2026-01-01 04:30:40 - testbed-volume-8-node-5 2026-01-01 04:30:40.096298 | orchestrator | 2026-01-01 04:30:40 - testbed-volume-1-node-4 2026-01-01 04:30:40.140180 | orchestrator | 2026-01-01 04:30:40 - testbed-volume-4-node-4 2026-01-01 04:30:40.183883 | orchestrator | 2026-01-01 04:30:40 - testbed-volume-6-node-3 2026-01-01 04:30:40.225622 | orchestrator | 2026-01-01 04:30:40 - testbed-volume-3-node-3 2026-01-01 04:30:40.269616 | orchestrator | 2026-01-01 04:30:40 - testbed-volume-0-node-3 2026-01-01 04:30:40.312836 | orchestrator | 2026-01-01 04:30:40 - disconnect routers 2026-01-01 04:30:40.397952 | orchestrator | 2026-01-01 04:30:40 - testbed 2026-01-01 04:30:41.278609 | orchestrator | 2026-01-01 04:30:41 - clean up subnets 2026-01-01 04:30:41.348156 | orchestrator | 2026-01-01 04:30:41 - subnet-testbed-management 2026-01-01 04:30:41.512205 | 
orchestrator | 2026-01-01 04:30:41 - clean up networks 2026-01-01 04:30:41.680338 | orchestrator | 2026-01-01 04:30:41 - net-testbed-management 2026-01-01 04:30:42.001531 | orchestrator | 2026-01-01 04:30:42 - clean up security groups 2026-01-01 04:30:42.044826 | orchestrator | 2026-01-01 04:30:42 - testbed-management 2026-01-01 04:30:42.160341 | orchestrator | 2026-01-01 04:30:42 - testbed-node 2026-01-01 04:30:42.288187 | orchestrator | 2026-01-01 04:30:42 - clean up floating ips 2026-01-01 04:30:42.322526 | orchestrator | 2026-01-01 04:30:42 - 81.163.193.2 2026-01-01 04:30:42.693683 | orchestrator | 2026-01-01 04:30:42 - clean up routers 2026-01-01 04:30:42.801508 | orchestrator | 2026-01-01 04:30:42 - testbed 2026-01-01 04:30:43.980606 | orchestrator | ok: Runtime: 0:00:19.971530 2026-01-01 04:30:43.982515 | 2026-01-01 04:30:43.982603 | PLAY RECAP 2026-01-01 04:30:43.982660 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-01-01 04:30:43.982686 | 2026-01-01 04:30:44.159132 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-01-01 04:30:44.160220 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-01-01 04:30:45.000271 | 2026-01-01 04:30:45.000519 | PLAY [Cleanup play] 2026-01-01 04:30:45.030687 | 2026-01-01 04:30:45.030943 | TASK [Set cloud fact (Zuul deployment)] 2026-01-01 04:30:45.085853 | orchestrator | ok 2026-01-01 04:30:45.095310 | 2026-01-01 04:30:45.095681 | TASK [Set cloud fact (local deployment)] 2026-01-01 04:30:45.141447 | orchestrator | skipping: Conditional result was False 2026-01-01 04:30:45.161225 | 2026-01-01 04:30:45.161401 | TASK [Clean the cloud environment] 2026-01-01 04:30:46.388769 | orchestrator | 2026-01-01 04:30:46 - clean up servers 2026-01-01 04:30:47.008638 | orchestrator | 2026-01-01 04:30:47 - clean up keypairs 2026-01-01 04:30:47.029794 | orchestrator | 2026-01-01 04:30:47 - wait for servers to be 
gone 2026-01-01 04:30:47.079123 | orchestrator | 2026-01-01 04:30:47 - clean up ports 2026-01-01 04:30:47.159425 | orchestrator | 2026-01-01 04:30:47 - clean up volumes 2026-01-01 04:30:47.237035 | orchestrator | 2026-01-01 04:30:47 - disconnect routers 2026-01-01 04:30:47.268509 | orchestrator | 2026-01-01 04:30:47 - clean up subnets 2026-01-01 04:30:47.293646 | orchestrator | 2026-01-01 04:30:47 - clean up networks 2026-01-01 04:30:47.457894 | orchestrator | 2026-01-01 04:30:47 - clean up security groups 2026-01-01 04:30:47.509101 | orchestrator | 2026-01-01 04:30:47 - clean up floating ips 2026-01-01 04:30:47.535855 | orchestrator | 2026-01-01 04:30:47 - clean up routers 2026-01-01 04:30:47.728110 | orchestrator | ok: Runtime: 0:00:01.582661 2026-01-01 04:30:47.730000 | 2026-01-01 04:30:47.730086 | PLAY RECAP 2026-01-01 04:30:47.730141 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-01-01 04:30:47.730167 | 2026-01-01 04:30:47.919680 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-01-01 04:30:47.921951 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-01-01 04:30:48.744593 | 2026-01-01 04:30:48.744752 | PLAY [Base post-fetch] 2026-01-01 04:30:48.761883 | 2026-01-01 04:30:48.762038 | TASK [fetch-output : Set log path for multiple nodes] 2026-01-01 04:30:48.821256 | orchestrator | skipping: Conditional result was False 2026-01-01 04:30:48.841060 | 2026-01-01 04:30:48.841316 | TASK [fetch-output : Set log path for single node] 2026-01-01 04:30:48.923944 | orchestrator | ok 2026-01-01 04:30:48.930238 | 2026-01-01 04:30:48.930411 | LOOP [fetch-output : Ensure local output dirs] 2026-01-01 04:30:49.430772 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/f24727ee07344a4ebbbc3849d4298a4b/work/logs" 2026-01-01 04:30:49.724593 | orchestrator -> localhost | changed: 
"/var/lib/zuul/builds/f24727ee07344a4ebbbc3849d4298a4b/work/artifacts" 2026-01-01 04:30:50.025185 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f24727ee07344a4ebbbc3849d4298a4b/work/docs" 2026-01-01 04:30:50.042197 | 2026-01-01 04:30:50.042469 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-01-01 04:30:51.044209 | orchestrator | changed: .d..t...... ./ 2026-01-01 04:30:51.044868 | orchestrator | changed: All items complete 2026-01-01 04:30:51.044928 | 2026-01-01 04:30:51.831116 | orchestrator | changed: .d..t...... ./ 2026-01-01 04:30:52.624488 | orchestrator | changed: .d..t...... ./ 2026-01-01 04:30:52.648375 | 2026-01-01 04:30:52.648515 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-01-01 04:30:52.681646 | orchestrator | skipping: Conditional result was False 2026-01-01 04:30:52.687074 | orchestrator | skipping: Conditional result was False 2026-01-01 04:30:52.707254 | 2026-01-01 04:30:52.707436 | PLAY RECAP 2026-01-01 04:30:52.707515 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-01-01 04:30:52.707552 | 2026-01-01 04:30:52.848667 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-01-01 04:30:52.851650 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-01-01 04:30:53.628363 | 2026-01-01 04:30:53.628536 | PLAY [Base post] 2026-01-01 04:30:53.644593 | 2026-01-01 04:30:53.644786 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-01-01 04:30:54.723314 | orchestrator | changed 2026-01-01 04:30:54.735925 | 2026-01-01 04:30:54.736082 | PLAY RECAP 2026-01-01 04:30:54.736172 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-01-01 04:30:54.736263 | 2026-01-01 04:30:54.877064 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-01-01 
04:30:54.879648 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-01-01 04:30:55.714014 | 2026-01-01 04:30:55.714185 | PLAY [Base post-logs] 2026-01-01 04:30:55.725077 | 2026-01-01 04:30:55.725218 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-01-01 04:30:56.196828 | localhost | changed 2026-01-01 04:30:56.216689 | 2026-01-01 04:30:56.216983 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-01-01 04:30:56.254961 | localhost | ok 2026-01-01 04:30:56.261578 | 2026-01-01 04:30:56.261721 | TASK [Set zuul-log-path fact] 2026-01-01 04:30:56.290482 | localhost | ok 2026-01-01 04:30:56.300276 | 2026-01-01 04:30:56.300423 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-01-01 04:30:56.336076 | localhost | ok 2026-01-01 04:30:56.339761 | 2026-01-01 04:30:56.339882 | TASK [upload-logs : Create log directories] 2026-01-01 04:30:56.854434 | localhost | changed 2026-01-01 04:30:56.857390 | 2026-01-01 04:30:56.857505 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-01-01 04:30:57.407945 | localhost -> localhost | ok: Runtime: 0:00:00.007319 2026-01-01 04:30:57.412744 | 2026-01-01 04:30:57.412873 | TASK [upload-logs : Upload logs to log server] 2026-01-01 04:30:57.999706 | localhost | Output suppressed because no_log was given 2026-01-01 04:30:58.002472 | 2026-01-01 04:30:58.002622 | LOOP [upload-logs : Compress console log and json output] 2026-01-01 04:30:58.052683 | localhost | skipping: Conditional result was False 2026-01-01 04:30:58.063911 | localhost | skipping: Conditional result was False 2026-01-01 04:30:58.071894 | 2026-01-01 04:30:58.072052 | LOOP [upload-logs : Upload compressed console log and json output] 2026-01-01 04:30:58.133317 | localhost | skipping: Conditional result was False 2026-01-01 04:30:58.133931 | 2026-01-01 04:30:58.139512 | localhost | skipping: Conditional result was False 2026-01-01 04:30:58.152488 | 2026-01-01 04:30:58.152672 
| LOOP [upload-logs : Upload console log and json output]